Open Access Journal ArticleDOI

Massive Online Crowdsourced Study of Subjective and Objective Picture Quality

TL;DR
The LIVE In the Wild Image Quality Challenge Database contains widely diverse authentic distortions on a large number of images captured with a representative variety of modern mobile devices, and was used to conduct a very large-scale, multi-month crowdsourced subjective study of image quality.
Abstract
Most publicly available image quality databases have been created under highly controlled conditions by introducing graded simulated distortions onto high-quality photographs. However, images captured using typical real-world mobile camera devices are usually afflicted by complex mixtures of multiple distortions, which are not necessarily well-modeled by the synthetic distortions found in existing databases. The originators of existing legacy databases usually conducted human psychometric studies to obtain statistically meaningful sets of human opinion scores on images in a stringently controlled visual environment, resulting in small data collections relative to other kinds of image analysis databases. Towards overcoming these limitations, we designed and created a new database that we call the LIVE In the Wild Image Quality Challenge Database, which contains widely diverse authentic image distortions on a large number of images captured using a representative variety of modern mobile devices. We also designed and implemented a new online crowdsourcing system, which we have used to conduct a very large-scale, multi-month image quality assessment subjective study. Our database consists of over 350000 opinion scores on 1162 images evaluated by over 7000 unique human observers. Despite the lack of control over the experimental environments of the numerous study participants, we demonstrate excellent internal consistency of the subjective dataset. We also evaluate several top-performing blind Image Quality Assessment algorithms on it and present insights on how mixtures of distortions challenge both end users as well as automatic perceptual quality prediction models.
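As a rough illustration of how such crowdsourced ratings become per-image quality labels and how internal consistency can be checked, here is a minimal sketch; the data layout, function names, and split-half procedure are assumptions made for illustration, not the study's actual processing pipeline.

```python
import numpy as np

def mean_opinion_scores(ratings_by_image):
    """Compute the Mean Opinion Score (MOS) per image from raw crowdsourced ratings.

    ratings_by_image: dict mapping image_id -> list of individual opinion scores.
    Returns a dict mapping image_id -> MOS.
    """
    return {img: float(np.mean(scores)) for img, scores in ratings_by_image.items()}

def split_half_consistency(ratings_by_image, n_trials=25, seed=0):
    """Estimate internal consistency by repeatedly splitting each image's raters
    into two random halves and correlating the two resulting MOS vectors."""
    rng = np.random.default_rng(seed)
    images = sorted(ratings_by_image)
    corrs = []
    for _ in range(n_trials):
        mos_a, mos_b = [], []
        for img in images:
            scores = np.asarray(ratings_by_image[img], dtype=float)
            perm = rng.permutation(len(scores))
            half = len(scores) // 2
            mos_a.append(scores[perm[:half]].mean())
            mos_b.append(scores[perm[half:]].mean())
        corrs.append(np.corrcoef(mos_a, mos_b)[0, 1])
    return float(np.mean(corrs))

# Toy data: three images with a handful of ratings each.
toy = {"img1": [62, 70, 65, 68], "img2": [30, 35, 28, 33], "img3": [80, 78, 85, 82]}
print(mean_opinion_scores(toy))
print(split_half_consistency(toy))
```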



Citations
Posted Content

The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

TL;DR: A new dataset of human perceptual similarity judgments is introduced; deep features are found to outperform all previous metrics by large margins on this dataset, suggesting that perceptual similarity is an emergent property shared across deep visual representations.
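The general recipe behind such learned perceptual metrics (distances between unit-normalized deep features, averaged over space and optionally weighted per channel) can be sketched as follows; the feature dictionaries and weights below are placeholders standing in for real network activations, not the released metric.

```python
import numpy as np

def deep_feature_distance(feats_x, feats_y, channel_weights=None):
    """Perceptual distance from deep features: for each layer, unit-normalize
    features across the channel axis, take (optionally weighted) squared
    differences, average over space, then sum the per-layer scores.

    feats_x, feats_y: dicts layer_name -> array of shape (C, H, W)
    channel_weights:  optional dict layer_name -> array of shape (C,)
                      (learned in the original work; uniform here).
    """
    total = 0.0
    for layer, fx in feats_x.items():
        fy = feats_y[layer]
        # Unit-normalize each spatial position across channels.
        fx = fx / (np.linalg.norm(fx, axis=0, keepdims=True) + 1e-10)
        fy = fy / (np.linalg.norm(fy, axis=0, keepdims=True) + 1e-10)
        diff2 = (fx - fy) ** 2                      # (C, H, W)
        if channel_weights is not None:
            diff2 = diff2 * channel_weights[layer][:, None, None]
        total += diff2.sum(axis=0).mean()           # spatial average of channel sums
    return float(total)

# Toy usage with random "features" standing in for real activations.
rng = np.random.default_rng(0)
fx = {"conv3": rng.standard_normal((8, 16, 16))}
fy = {"conv3": rng.standard_normal((8, 16, 16))}
print(deep_feature_distance(fx, fy))
```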
Journal ArticleDOI

Waterloo Exploration Database: New Challenges for Image Quality Assessment Models

TL;DR: This work establishes a large-scale database named the Waterloo Exploration Database, which in its current state contains 4,744 pristine natural images and 94,880 distorted images created from them, and presents three alternative test criteria to evaluate the performance of IQA models, namely, the pristine/distorted image discriminability test, the listwise ranking consistency test, and the pairwise preference consistency test.
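As a toy illustration of the pairwise preference consistency idea, the check below counts how often a model's predicted scores agree with image pairs whose quality ordering is known by construction; the pair format and example values are assumptions, not the database's exact protocol.

```python
def pairwise_preference_consistency(model_scores, preferred_pairs):
    """Fraction of quality-discriminable pairs on which a model agrees.

    model_scores:    dict image_id -> predicted quality (higher = better).
    preferred_pairs: list of (better_id, worse_id) tuples where the first
                     image is of higher quality by construction
                     (e.g., a lightly vs. heavily distorted version).
    """
    agree = sum(model_scores[b] > model_scores[w] for b, w in preferred_pairs)
    return agree / len(preferred_pairs)

# Toy usage with made-up predictions.
scores = {"a_light": 0.82, "a_heavy": 0.40, "b_light": 0.75, "b_heavy": 0.78}
pairs = [("a_light", "a_heavy"), ("b_light", "b_heavy")]
print(pairwise_preference_consistency(scores, pairs))  # 0.5
```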
Journal ArticleDOI

Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment

TL;DR: A deep neural network-based approach to image quality assessment (IQA) that allows for joint learning of local quality and local weights in a unified framework and shows a high ability to generalize between different databases, indicating a high robustness of the learned features.
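The pooling step implied by "joint learning of local quality and local weights" can be written compactly: patchwise quality predictions are combined via a weighted average using learned patchwise weights. The arrays below are placeholders for network outputs, and the function name is illustrative.

```python
import numpy as np

def weighted_patch_pooling(patch_qualities, patch_weights, eps=1e-8):
    """Combine per-patch quality predictions into one image score using
    per-patch weights: q = sum_i w_i * q_i / sum_i w_i.

    In the paper these weights are learned jointly with the quality branch;
    here both inputs are plain arrays standing in for network outputs.
    """
    w = np.maximum(np.asarray(patch_weights, dtype=float), 0.0)  # keep weights non-negative
    q = np.asarray(patch_qualities, dtype=float)
    return float((w * q).sum() / (w.sum() + eps))

# Toy usage: five patches, with the third patch deemed most reliable.
print(weighted_patch_pooling([52.0, 48.0, 61.0, 55.0, 50.0],
                             [0.2, 0.1, 0.9, 0.4, 0.3]))
```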
Journal ArticleDOI

End-to-End Blind Image Quality Assessment Using Deep Neural Networks

TL;DR: This work demonstrates the strong competitiveness of MEON against state-of-the-art BIQA models using the group maximum differentiation competition methodology, and empirically shows that GDN is effective at reducing model parameters/layers while achieving similar quality prediction performance.
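The GDN transform referred to here divides each channel's response by a learned combination of squared responses across channels. A per-pixel numpy sketch, with random stand-ins for the learned beta and gamma parameters, might look like this.

```python
import numpy as np

def gdn(x, beta, gamma):
    """Generalized divisive normalization applied channel-wise:
        y_c = x_c / sqrt(beta_c + sum_k gamma_{c,k} * x_k^2)

    x:     activations of shape (C, H, W)
    beta:  (C,) positive offsets
    gamma: (C, C) non-negative mixing weights
    (In practice beta and gamma are learned; random values are used below.)
    """
    C = x.shape[0]
    x2 = (x ** 2).reshape(C, -1)                 # (C, H*W)
    denom = np.sqrt(beta[:, None] + gamma @ x2)  # (C, H*W)
    return (x.reshape(C, -1) / denom).reshape(x.shape)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
beta = np.full(4, 1e-2)
gamma = np.abs(rng.standard_normal((4, 4))) * 0.1
print(gdn(x, beta, gamma).shape)  # (4, 8, 8)
```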
Journal ArticleDOI

Blind Image Quality Assessment Using a Deep Bilinear Convolutional Neural Network

TL;DR: A deep bilinear model for blind image quality assessment is proposed that works for both synthetically and authentically distorted images and achieves state-of-the-art performance on both synthetic and authentic IQA databases.
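The "bilinear" part typically refers to pooling the outer product of two feature streams over spatial locations, followed by signed square root and L2 normalization; below is a numpy sketch under that assumption, with random arrays in place of CNN feature maps.

```python
import numpy as np

def bilinear_pool(feat_a, feat_b, eps=1e-10):
    """Bilinear pooling of two feature maps (one per branch): average the
    per-location outer products, then apply signed square root and L2
    normalization, as is standard for bilinear CNN features.

    feat_a: (Ca, H, W), feat_b: (Cb, H, W) -> vector of length Ca*Cb.
    """
    Ca, H, W = feat_a.shape
    Cb = feat_b.shape[0]
    a = feat_a.reshape(Ca, H * W)
    b = feat_b.reshape(Cb, H * W)
    pooled = (a @ b.T) / (H * W)            # (Ca, Cb) average outer product
    v = pooled.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))     # signed square root
    return v / (np.linalg.norm(v) + eps)    # L2 normalization

rng = np.random.default_rng(0)
print(bilinear_pool(rng.standard_normal((8, 7, 7)),
                    rng.standard_normal((6, 7, 7))).shape)  # (48,)
```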
References
Journal ArticleDOI

Image quality assessment: from error visibility to structural similarity

TL;DR: In this article, a structural similarity (SSIM) index for image quality assessment is proposed, based on the degradation of structural information; it is validated against subjective ratings on a database of images compressed with JPEG and JPEG2000.
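For reference, the single-window form of the index can be sketched directly from its definition; the full method applies it over local windows and averages the resulting map, which is omitted here for brevity.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two grayscale images of equal shape:

        SSIM = ((2*mu_x*mu_y + C1) * (2*sigma_xy + C2)) /
               ((mu_x^2 + mu_y^2 + C1) * (sigma_x^2 + sigma_y^2 + C2))
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (x.var() + y.var() + c2))

rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (64, 64))
noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)
print(ssim_global(ref, ref))    # 1.0 for identical images
print(ssim_global(ref, noisy))  # < 1.0
```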
Journal ArticleDOI

A fast learning algorithm for deep belief nets

TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
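A compact sketch of the greedy layer-wise idea follows: each layer is trained as a binary RBM with one-step contrastive divergence on the previous layer's hidden activations. Hyperparameters and the toy data are illustrative only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, n_hidden, epochs=10, lr=0.05, seed=0):
    """One-step contrastive divergence (CD-1) training of a binary RBM."""
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        v0 = data
        h0 = sigmoid(v0 @ W + b_h)
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h_sample @ W.T + b_v)       # reconstruction
        h1 = sigmoid(v1 @ W + b_h)
        W += lr * (v0.T @ h0 - v1.T @ h1) / len(data)
        b_v += lr * (v0 - v1).mean(axis=0)
        b_h += lr * (h0 - h1).mean(axis=0)
    return W, b_v, b_h

def greedy_layerwise_dbn(data, layer_sizes):
    """Train a stack of RBMs one layer at a time (the greedy DBN recipe)."""
    layers, x = [], data
    for n_hidden in layer_sizes:
        W, b_v, b_h = train_rbm(x, n_hidden)
        layers.append((W, b_v, b_h))
        x = sigmoid(x @ W + b_h)   # hidden activations feed the next layer
    return layers

rng = np.random.default_rng(0)
toy = (rng.random((100, 20)) > 0.5).astype(float)
dbn = greedy_layerwise_dbn(toy, [16, 8])
print([W.shape for W, _, _ in dbn])  # [(20, 16), (16, 8)]
```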
Proceedings ArticleDOI

Multiscale structural similarity for image quality assessment

TL;DR: This paper proposes a multiscale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions, and develops an image synthesis method to calibrate the parameters that define the relative importance of different scales.
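A minimal sketch of the multiscale idea: evaluate a (here, simplified global) SSIM at successive dyadic scales and combine the values as a weighted geometric mean. The published method treats luminance and contrast/structure terms separately per scale, which is omitted here for brevity.

```python
import numpy as np

def _ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    mu_x, mu_y = x.mean(), y.mean()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (x.var() + y.var() + c2))

def _downsample2(img):
    """Halve resolution by 2x2 average pooling (cropping odd edges)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def ms_ssim_sketch(x, y, weights=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333)):
    """Weighted geometric mean of SSIM values computed across dyadic scales."""
    vals = []
    for i, _ in enumerate(weights):
        vals.append(_ssim_global(x, y))
        if i < len(weights) - 1:
            x, y = _downsample2(x), _downsample2(y)
    return float(np.prod([v ** w for v, w in zip(vals, weights)]))

rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (128, 128))
noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)
print(ms_ssim_sketch(ref, noisy))
```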
Journal ArticleDOI

No-Reference Image Quality Assessment in the Spatial Domain

TL;DR: Despite its simplicity, BRISQUE is shown to be statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and highly competitive with all present-day distortion-generic NR IQA algorithms.
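BRISQUE's features are built on mean-subtracted contrast-normalized (MSCN) coefficients of the luminance image; a sketch of that first step follows, with the subsequent generalized-Gaussian model fitting omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(img, sigma=7 / 6, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients:

        MSCN(i, j) = (I(i, j) - mu(i, j)) / (sigma(i, j) + C)

    where mu and sigma are local Gaussian-weighted mean and standard
    deviation. BRISQUE then fits generalized Gaussian models to these
    coefficients (and to products of neighboring coefficients) to form
    its feature vector; that fitting step is omitted here.
    """
    img = np.asarray(img, dtype=float)
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img * img, sigma) - mu * mu
    sd = np.sqrt(np.maximum(var, 0.0))
    return (img - mu) / (sd + c)

rng = np.random.default_rng(0)
patch = rng.uniform(0, 255, (64, 64))
coeffs = mscn_coefficients(patch)
print(coeffs.mean(), coeffs.std())  # roughly zero-mean, compressed spread
```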
Journal ArticleDOI

Making a “Completely Blind” Image Quality Analyzer

TL;DR: This work derives a blind IQA model that only makes use of measurable deviations from statistical regularities observed in natural images, without training on human-rated distorted images and, indeed, without any exposure to distorted images.
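A sketch of the underlying idea, assuming natural-scene-statistics feature vectors have already been extracted: fit a multivariate Gaussian to features from a pristine corpus and to features from the test image's patches, then measure the distance between the two fits. The feature arrays below are random stand-ins for real features.

```python
import numpy as np

def niqe_style_distance(pristine_feats, test_feats):
    """Distance between two multivariate Gaussian fits of NSS feature vectors:

        d = sqrt((nu1 - nu2)^T ((Sigma1 + Sigma2) / 2)^(-1) (nu1 - nu2))

    pristine_feats, test_feats: arrays of shape (n_samples, n_features).
    The actual feature extraction (from MSCN-type coefficients) is omitted.
    """
    nu1, nu2 = pristine_feats.mean(axis=0), test_feats.mean(axis=0)
    s1 = np.cov(pristine_feats, rowvar=False)
    s2 = np.cov(test_feats, rowvar=False)
    diff = nu1 - nu2
    pooled = (s1 + s2) / 2.0
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))

rng = np.random.default_rng(0)
pristine = rng.standard_normal((500, 6))
shifted = rng.standard_normal((200, 6)) + 0.5   # shifted statistics
print(niqe_style_distance(pristine, pristine[:200]))  # small
print(niqe_style_distance(pristine, shifted))         # larger
```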