Author

Lukas Krasula

Bio: Lukas Krasula is an academic researcher from the University of Nantes. The author has contributed to research in the topics of image quality and video quality. The author has an h-index of 10 and has co-authored 40 publications receiving 304 citations. Previous affiliations of Lukas Krasula include the Czech Technical University in Prague and Netflix.

Papers
Proceedings ArticleDOI
06 Jun 2016
TL;DR: A new methodology for evaluating the performance of objective models is proposed, based on determining the classification abilities of the models in two scenarios inspired by real applications, which makes it easy to evaluate performance on data from multiple subjective experiments.
Abstract: There are several standard methods for evaluating the performance of objective quality assessment models with respect to the results of subjective tests. However, all of them suffer from one or more of the following drawbacks: they do not consider the uncertainty in the subjective scores, requiring the models to make a definite decision where the correct behavior is not known; they are vulnerable to the quality range of the stimuli in the experiments; and, in order to compare the models, they require a mapping of predicted values to the subjective scores, thus not comparing the models exactly as they are used in real scenarios. In this paper, a new methodology for evaluating the performance of objective models is proposed. The method is based on determining the classification abilities of the models in two scenarios inspired by real applications. It does not suffer from the previously stated drawbacks and makes it easy to evaluate performance on data from multiple subjective experiments. Moreover, techniques to determine the statistical significance of the performance differences are suggested. The proposed framework is tested on several selected metrics and datasets, showing its ability to provide complementary information about the models' behavior while remaining consistent with other state-of-the-art methods.
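The pair-based, two-scenario classification analysis described above can be sketched as follows. This is a minimal illustration rather than the paper's exact procedure: it assumes per-stimulus subjective means, standard deviations, observer counts, and objective scores are available, and it uses a simple two-sample z-test as the criterion for deciding whether a pair of stimuli is significantly different.

```python
# Minimal sketch of a two-scenario, pair-classification evaluation (illustrative only).
import numpy as np
from itertools import combinations
from scipy.stats import norm
from sklearn.metrics import roc_auc_score

def pair_classification_auc(mos, mos_std, n_obs, obj, alpha=0.05):
    """AUC for 'different vs. similar' and 'better vs. worse' pair classification."""
    diff_labels, diff_scores = [], []      # scenario 1: is the pair significantly different?
    better_labels, better_scores = [], []  # scenario 2: which stimulus of the pair is better?
    for i, j in combinations(range(len(mos)), 2):
        # Illustrative significance criterion: two-sample z-test on the subjective means.
        se = np.sqrt(mos_std[i] ** 2 / n_obs[i] + mos_std[j] ** 2 / n_obs[j])
        z = (mos[i] - mos[j]) / se
        significant = abs(z) > norm.ppf(1 - alpha / 2)
        d_obj = obj[i] - obj[j]
        diff_labels.append(int(significant))
        diff_scores.append(abs(d_obj))      # a larger |difference| should indicate "different"
        if significant:
            better_labels.append(int(mos[i] > mos[j]))
            better_scores.append(d_obj)     # the signed difference should predict the direction
    return (roc_auc_score(diff_labels, diff_scores),
            roc_auc_score(better_labels, better_scores))
```

An area under the ROC curve close to 1 in either scenario indicates that differences in the objective scores reliably separate the corresponding pairs; the statistical-significance tests for comparing models are defined in the paper itself and are not reproduced here.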

86 citations

Journal ArticleDOI
TL;DR: A framework specifically adapted to the QA of sharpened images and to the comparison of objective metrics in this context is introduced; the problem of selecting the correct procedure for subjective evaluation is addressed, and a subjective test is performed to demonstrate the use of the framework.
Abstract: Most of the effort in image quality assessment (QA) has so far been dedicated to the degradation of the image. However, there are also many algorithms in the image processing chain that can enhance the quality of an input image. These include procedures for contrast enhancement, deblurring, sharpening, up-sampling, denoising, transfer function compensation, and so on. In this paper, possible strategies for the QA of sharpened images are investigated. This task is not trivial, because sharpening techniques can increase the perceived quality as well as introduce artifacts leading to a quality drop (over-sharpening). Here, a framework specifically adapted to the QA of sharpened images and to the comparison of objective metrics in this context is introduced; the framework can be adopted in other QA areas as well. The problem of selecting the correct procedure for subjective evaluation was addressed, and a subjective test on blurred, sharpened, and over-sharpened images was performed in order to demonstrate the use of the framework. The obtained ground-truth data were used for testing the suitability of state-of-the-art objective quality metrics for the assessment of sharpened images. The comparison was performed by a novel procedure using rank-order correlation analyses, which was found more appropriate for the task than standard methods. Furthermore, seven possible augmentations of the no-reference S3 metric adapted for sharpened images are proposed. The performance of the metric is significantly improved and is also superior to the rest of the tested quality criteria with respect to the subjective data.
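The rank-order correlation analysis mentioned above can be illustrated with the standard rank correlation coefficients; this is only a stand-in for the paper's more elaborate procedure, and the helper below is a hypothetical function assuming paired objective and subjective scores.

```python
# Hedged sketch: rank-order agreement between objective metric outputs and subjective scores.
from scipy.stats import spearmanr, kendalltau

def rank_agreement(objective_scores, subjective_scores):
    srocc, _ = spearmanr(objective_scores, subjective_scores)   # Spearman rank-order correlation
    krocc, _ = kendalltau(objective_scores, subjective_scores)  # Kendall rank correlation
    return srocc, krocc
```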

50 citations

Journal ArticleDOI
TL;DR: Objective video quality metrics are designed to estimate the quality of experience of the end user, but these objective metrics are usually validated with video streams degraded under common conditions.
Abstract: Objective video quality metrics are designed to estimate the quality of experience of the end user. However, these objective metrics are usually validated with video streams degraded under common dist ...

24 citations

Proceedings ArticleDOI
20 Nov 2014
TL;DR: It is demonstrated that profiles A and B lead to similar saturation of quality at higher bit rates, while profile C exhibits no saturation; profiles B and C also appear to be more dependent on the TMOs used for the base layer than profile A.
Abstract: The upcoming JPEG XT is under development for High Dynamic Range (HDR) image compression. This standard encodes a Low Dynamic Range (LDR) version of the HDR image generated by a Tone-Mapping Operator (TMO) using the conventional JPEG coding as a base layer and encodes the extra HDR information in a residual layer. This paper studies the performance of the three profiles of JPEG XT (referred to as profiles A, B and C) using a test set of six HDR images. Four TMO techniques were used for the base layer image generation to assess the influence of the TMOs on the performance of JPEG XT profiles. Then, the HDR images were coded with different quality levels for the base layer and for the residual layer. The performance of each profile was evaluated using Signal to Noise Ratio (SNR), Feature SIMilarity Index (FSIM), Root Mean Square Error (RMSE), and CIEDE2000 color difference objective metrics. The evaluation results demonstrate that profiles A and B lead to similar saturation of quality at the higher bit rates, while profile C exhibits no saturation. Profiles B and C appear to be more dependent on TMOs used for the base layer compared to profile A.
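Two of the fidelity measures named in this evaluation, SNR and RMSE, reduce to simple pixel-wise computations; the sketch below shows them on floating-point HDR arrays (FSIM and CIEDE2000 need dedicated implementations and are omitted). The function names are illustrative.

```python
# Illustrative RMSE and SNR on floating-point HDR image arrays of equal shape.
import numpy as np

def rmse(ref, dist):
    err = ref.astype(np.float64) - dist.astype(np.float64)
    return float(np.sqrt(np.mean(err ** 2)))

def snr_db(ref, dist):
    ref = ref.astype(np.float64)
    noise = ref - dist.astype(np.float64)
    return float(10.0 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2)))
```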

23 citations

Journal ArticleDOI
TL;DR: A subjective experiment attempting to determine users' preference for natural and computer-generated content in two different viewing scenarios (with and without the HDR reference) shows that the absence of the reference can significantly influence the subjects' preferences for natural images, while no significant impact has been found in the case of synthetic images.
Abstract: The popularity of high dynamic range (HDR) imaging has grown in both the academic and private research sectors. Since the native visualization of HDR content still has its limitations, the importance of dynamic range compression (i.e., tone-mapping) is very high. This paper evaluates observers' preference in the context of image tone-mapping. Given the different nature of natural and computer-generated content, the way observers perceive the quality of tone-mapped images can be fundamentally different. In this paper, we describe a subjective experiment attempting to determine users' preference with respect to these two types of content in two different viewing scenarios, with and without the HDR reference. The results show that the absence of the reference can significantly influence the subjects' preferences for the natural images, while no significant impact has been found in the case of the synthetic images. Moreover, we introduce a benchmarking framework and compare the performance of selected objective metrics. The resulting dataset and framework are made publicly available to provide a common test bed and methodology for evaluating metrics in the considered scenario.

22 citations


Cited by
Journal ArticleDOI
TL;DR: Different image quality metrics are compared to give a comprehensive view of structural and feature similarity measures between restored and original objects on the basis of perception.
Abstract: Quality is a very important parameter for all objects and their functionalities. In image-based object recognition, image quality is a prime criterion. For authentic image quality evaluation, ground truth is required, but in practice it is very difficult to obtain. Usually, image quality is assessed by full-reference metrics such as MSE (Mean Square Error) and PSNR (Peak Signal to Noise Ratio). In contrast to MSE and PSNR, two more full-reference metrics, SSIM (Structured Similarity Indexing Method) and FSIM (Feature Similarity Indexing Method), have recently been developed to compare the structural and feature similarity measures between restored and original objects on the basis of perception. This paper mainly focuses on comparing different image quality metrics to give a comprehensive view. Experimentation with these metrics using benchmark images is performed through denoising for different noise concentrations. All metrics gave consistent results. However, from a representation perspective, SSIM and FSIM are normalized while MSE and PSNR are not; and from a semantic perspective, MSE and PSNR give only absolute error, whereas SSIM and FSIM give perception- and saliency-based error. So, SSIM and FSIM can be considered more interpretable than MSE and PSNR.
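A hedged sketch of how these full-reference measures are typically computed with scikit-image follows (FSIM is not available in scikit-image and is omitted); it assumes 8-bit grayscale arrays of equal shape, and the helper name is illustrative.

```python
# Full-reference scores for a restored image against its original (8-bit grayscale assumed).
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def full_reference_scores(original, restored):
    return {
        "MSE":  mean_squared_error(original, restored),
        "PSNR": peak_signal_noise_ratio(original, restored, data_range=255),
        "SSIM": structural_similarity(original, restored, data_range=255),
    }
```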

507 citations

Journal ArticleDOI
TL;DR: This survey provides a general overview of classical algorithms and recent progress in the field of perceptual image quality assessment and describes the performance of state-of-the-art quality measures for visual signals.
Abstract: Perceptual quality assessment plays a vital role in visual communication systems owing to the existence of quality degradations introduced in various stages of visual signal acquisition, compression, transmission, and display. Quality assessment for visual signals can be performed subjectively and objectively, and objective quality assessment is usually preferred owing to its high efficiency and easy deployment. A large number of subjective and objective visual quality assessment studies have been conducted in recent years. In this survey, we give an up-to-date and comprehensive review of these studies. Specifically, the frequently used subjective image quality assessment databases are first reviewed, as they serve as the validation set for the objective measures. Second, the objective image quality assessment measures are classified and reviewed according to the applications and the methodologies utilized in the quality measures. Third, the performances of the state-of-the-art quality measures for visual signals are compared, with an introduction of the evaluation protocols. This survey provides a general overview of classical algorithms and recent progress in the field of perceptual image quality assessment.

281 citations

Journal ArticleDOI
TL;DR: This research concludes that SSIM is a better measure of imperceptibility in all aspects, and that future steganographic research should use at least SSIM.
Abstract: Peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) are two measuring tools that are widely used in image quality assessment. In steganography images in particular, these two measures are used to assess the quality of imperceptibility. PSNR has been in use longer than SSIM, is simple, has been widely used in various digital image measurements, and is considered tested and valid. SSIM is a newer measurement tool designed around three factors, i.e., luminance, contrast, and structure, to better match the workings of the human visual system. Some research has discussed the correlation and comparison of these two measuring tools, but no research explicitly discusses and suggests which measurement tool is more suitable for steganography. This study aims to review, prove, and analyze the results of PSNR and SSIM measurements on three spatial-domain image steganography methods, i.e., LSB, PVD, and CRT. Color images were chosen as container images because human vision is more sensitive to color changes than to grayscale changes. The test results revealed several conflicting findings: LSB achieves the best value according to PSNR, while PVD achieves the best value according to SSIM. Additionally, the changes based on the histogram are more noticeable in LSB and CRT than in PVD. Other analyses, such as the RS attack, also show results that are more in line with the SSIM measurements than with PSNR. Based on the results of the testing and analysis, this research concludes that SSIM is a better measure of imperceptibility in all aspects, and it is preferable that future steganographic research use at least SSIM.
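The measurement setup described above can be sketched as follows; the LSB embedding is a deliberately simplified illustration (single bit plane, no payload framing), and the helper names are hypothetical. It assumes an 8-bit color cover image and a flat array of 0/1 payload bits.

```python
# Simplified LSB embedding plus the two imperceptibility measures compared in the study.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def embed_lsb(cover, payload_bits):
    """Overwrite the least significant bit of the first len(payload_bits) samples."""
    stego = cover.copy().ravel()
    stego[:len(payload_bits)] = (stego[:len(payload_bits)] & 0xFE) | payload_bits
    return stego.reshape(cover.shape)

def imperceptibility(cover, stego):
    psnr = peak_signal_noise_ratio(cover, stego, data_range=255)
    ssim = structural_similarity(cover, stego, channel_axis=-1, data_range=255)
    return psnr, ssim
```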

204 citations

Journal ArticleDOI
TL;DR: This study improved the performance of the road extraction network by integrating atrous spatial pyramid pooling (ASPP) with an Encoder-Decoder network and utilized the structural similarity (SSIM) as a loss function for road extraction.
Abstract: The technology used for road extraction from remote sensing images plays an important role in urban planning, traffic management, navigation, and other geographic applications. Although deep learning methods have greatly enhanced the development of road extractions in recent years, this technology is still in its infancy. Because the characteristics of road targets are complex, the accuracy of road extractions is still limited. In addition, the ambiguous prediction of semantic segmentation methods also makes the road extraction result blurry. In this study, we improved the performance of the road extraction network by integrating atrous spatial pyramid pooling (ASPP) with an Encoder-Decoder network. The proposed approach takes advantage of ASPP’s ability to extract multiscale features and the Encoder-Decoder network’s ability to extract detailed features. Therefore, it can achieve accurate and detailed road extraction results. For the first time, we utilized the structural similarity (SSIM) as a loss function for road extraction. Therefore, the ambiguous predictions in the extraction results can be removed, and the image quality of the extracted roads can be improved. The experimental results using the Massachusetts Road dataset show that our method achieves an F1-score of 83.5% and an SSIM of 0.893. Compared with the normal U-net, our method improves the F1-score by 2.6% and the SSIM by 0.18. Therefore, it is demonstrated that the proposed approach can extract roads from remote sensing images more effectively and clearly than the other compared methods.
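Using (1 - SSIM) as a training loss, as described above, can be sketched in PyTorch as follows; this simplified version uses uniform average pooling instead of a Gaussian window and illustrates the idea rather than reproducing the paper's implementation.

```python
# Hedged sketch: (1 - SSIM) as a segmentation loss, with uniform windows for simplicity.
import torch
import torch.nn.functional as F

def ssim_loss(pred, target, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    # pred, target: (N, 1, H, W) tensors with values in [0, 1]
    pad = window // 2
    mu_p = F.avg_pool2d(pred, window, stride=1, padding=pad)
    mu_t = F.avg_pool2d(target, window, stride=1, padding=pad)
    var_p = F.avg_pool2d(pred * pred, window, stride=1, padding=pad) - mu_p ** 2
    var_t = F.avg_pool2d(target * target, window, stride=1, padding=pad) - mu_t ** 2
    cov = F.avg_pool2d(pred * target, window, stride=1, padding=pad) - mu_p * mu_t
    ssim_map = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2))
    return 1.0 - ssim_map.mean()
```

In training, such a term would typically be combined with a pixel-wise loss (e.g., binary cross-entropy) rather than used alone.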

101 citations