Journal ArticleDOI

Interval-Based Least Squares for Uncertainty-Aware Learning in Human-Centric Multimedia Systems

27 Oct 2021-IEEE Transactions on Neural Networks (Institute of Electrical and Electronics Engineers (IEEE))-Vol. 32, Iss: 11, pp 5241-5246
TL;DR: An uncertainty-aware loss function is proposed that explicitly accounts for data uncertainty and is useful for both optimization (training) and validation; it is also relevant to more recent paradigms, such as crowdsourcing, where larger uncertainty in the ground truth is expected.
Abstract: Machine learning (ML) methods are popular in several application areas of multimedia signal processing. However, most existing solutions in this area, including the popular least squares, rely on penalizing predictions that deviate from the target ground-truth values. In other words, uncertainty in the ground-truth data is simply ignored. As a result, optimization and validation overemphasize a single target value when, in fact, human subjects themselves did not unanimously agree on it. This leads to an unreasonable scenario where the trained model is not allowed the benefit of the doubt in terms of prediction accuracy. The problem becomes even more significant in the context of more recent human-centric and immersive multimedia systems, where user feedback and interaction are influenced by higher degrees of freedom (leading to higher levels of uncertainty in the ground truth). To ameliorate this drawback, we propose an uncertainty-aware loss function (referred to as $\text{MSE}^{*}$) that explicitly accounts for data uncertainty and is useful for both optimization (training) and validation. As examples, we demonstrate the utility of the proposed method for blind estimation of the perceptual quality of audiovisual signals, panoramic images, and images affected by camera-induced distortions. The experimental results support the theoretical ideas in terms of reducing prediction errors. The proposed method is also relevant in the context of more recent paradigms, such as crowdsourcing, where larger uncertainty in the ground truth is expected.
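The exact form of $\text{MSE}^{*}$ is given in the paper itself; the sketch below is only a minimal illustration of the interval-based idea, assuming the intended behavior is that a prediction falling within the reported confidence interval of the ground truth incurs zero penalty, and that only the excess beyond the interval boundary is squared. The function name mse_star and the symmetric half-width are illustrative, not the authors' notation.

```python
import numpy as np

def mse_star(y_pred, y_true, half_width):
    """Interval-based squared error (illustrative): a prediction inside
    [y_true - half_width, y_true + half_width] incurs zero loss; outside
    it, only the excess beyond the interval boundary is penalized."""
    excess = np.maximum(np.abs(y_pred - y_true) - half_width, 0.0)
    return np.mean(excess ** 2)

# Example: mean opinion scores (MOS) with per-item confidence half-widths.
mos = np.array([3.2, 4.1, 2.7])
ci = np.array([0.3, 0.2, 0.4])
pred = np.array([3.4, 4.6, 2.1])
print(mse_star(pred, mos, ci))  # only the 2nd and 3rd predictions are penalized
```

Under this reading, a model is "given the benefit of the doubt" whenever its prediction is indistinguishable from the ground truth at the subjects' own level of agreement.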
References
Proceedings ArticleDOI
13 Aug 2016
TL;DR: XGBoost is a scalable end-to-end tree boosting system that introduces a sparsity-aware algorithm for sparse data and a weighted quantile sketch for approximate tree learning, achieving state-of-the-art results on many machine learning challenges.
Abstract: Tree boosting is a highly effective and widely used machine learning method. In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges. We propose a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning. More importantly, we provide insights on cache access patterns, data compression and sharding to build a scalable tree boosting system. By combining these insights, XGBoost scales beyond billions of examples using far fewer resources than existing systems.
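Since this entry describes the XGBoost system, a minimal usage sketch of its Python package may be helpful; the synthetic data and parameter values below are illustrative and not taken from the paper.

```python
import numpy as np
import xgboost as xgb

# Synthetic regression data; in practice X would be (possibly sparse) features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=500)

dtrain = xgb.DMatrix(X[:400], label=y[:400])
dtest = xgb.DMatrix(X[400:], label=y[400:])

# Illustrative hyperparameters for gradient-boosted regression trees.
params = {"objective": "reg:squarederror", "max_depth": 4, "eta": 0.1}
booster = xgb.train(params, dtrain, num_boost_round=100,
                    evals=[(dtest, "test")], verbose_eval=False)
pred = booster.predict(dtest)
```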

14,872 citations

Journal ArticleDOI
TL;DR: Despite its simplicity, BRISQUE is shown to be statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and to be highly competitive with all present-day distortion-generic NR IQA algorithms.
Abstract: We propose a natural scene statistic-based distortion-generic blind/no-reference (NR) image quality assessment (IQA) model that operates in the spatial domain. The new model, dubbed the blind/referenceless image spatial quality evaluator (BRISQUE), does not compute distortion-specific features, such as ringing, blur, or blocking, but instead uses scene statistics of locally normalized luminance coefficients to quantify possible losses of “naturalness” in the image due to the presence of distortions, thereby leading to a holistic measure of quality. The underlying features derive from the empirical distribution of locally normalized luminances and products of locally normalized luminances under a spatial natural scene statistic model. No transformation to another coordinate frame (DCT, wavelet, etc.) is required, distinguishing it from prior NR IQA approaches. Despite its simplicity, we are able to show that BRISQUE is statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and is highly competitive with respect to all present-day distortion-generic NR IQA algorithms. BRISQUE has very low computational complexity, making it well suited for real-time applications. BRISQUE features may be used for distortion identification as well. To illustrate a new practical application of BRISQUE, we describe how a nonblind image denoising algorithm can be augmented with BRISQUE in order to perform blind image denoising. Results show that BRISQUE augmentation leads to performance improvements over state-of-the-art methods. A software release of BRISQUE is available online: http://live.ece.utexas.edu/research/quality/BRISQUE_release.zip for public use and evaluation.
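The "locally normalized luminance coefficients" mentioned above are commonly computed as mean-subtracted, contrast-normalized (MSCN) values. Below is a minimal sketch of that computation; the Gaussian window parameter and the stabilizing constant are assumptions, not necessarily the exact values used in the BRISQUE release.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7 / 6, c=1.0):
    """Mean-subtracted, contrast-normalized (MSCN) coefficients:
    (I - mu) / (sigma_local + c), with Gaussian-weighted local statistics."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)
    local_var = gaussian_filter(image * image, sigma) - mu * mu
    local_std = np.sqrt(np.maximum(local_var, 0.0))  # clamp numerical noise
    return (image - mu) / (local_std + c)
```

BRISQUE then characterizes the empirical distribution of these coefficients, and of products of neighboring coefficients, to form its quality-predictive feature vector.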

3,780 citations

Journal ArticleDOI
TL;DR: A blind IQA model is derived that makes use only of measurable deviations from statistical regularities observed in natural images, without training on human-rated distorted images and, indeed, without any exposure to distorted images.
Abstract: An important aim of research on the blind image quality assessment (IQA) problem is to devise perceptual models that can predict the quality of distorted images with as little prior knowledge of the images or their distortions as possible. Current state-of-the-art “general purpose” no-reference (NR) IQA algorithms require knowledge about anticipated distortions in the form of training examples and corresponding human opinion scores. However, we have recently derived a blind IQA model that only makes use of measurable deviations from statistical regularities observed in natural images, without training on human-rated distorted images and, indeed, without any exposure to distorted images. Thus, it is “completely blind.” The new IQA model, which we call the Natural Image Quality Evaluator (NIQE), is based on the construction of a “quality aware” collection of statistical features based on a simple and successful space domain natural scene statistic (NSS) model. These features are derived from a corpus of natural, undistorted images. Experimental results show that the new index delivers performance comparable to top-performing NR IQA models that require training on large databases of human opinions of distorted images. A software release is available at http://live.ece.utexas.edu/research/quality/niqe_release.zip.
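In the NIQE formulation, the NSS features of the pristine corpus and of the test image are each summarized by a multivariate Gaussian fit, and quality is the distance between the two fits. The sketch below shows only that final distance computation, a Mahalanobis-like distance using the average of the two covariance matrices; the feature extraction over image patches is omitted, and the variable names are illustrative.

```python
import numpy as np

def niqe_distance(mu_pristine, cov_pristine, mu_test, cov_test):
    """Distance between two multivariate Gaussian fits of NSS features:
    sqrt((mu1 - mu2)^T ((S1 + S2) / 2)^{-1} (mu1 - mu2))."""
    diff = mu_pristine - mu_test
    pooled = (cov_pristine + cov_test) / 2.0
    # Pseudo-inverse guards against a (near-)singular pooled covariance.
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))
```

A larger distance indicates a larger departure from natural-image statistics, i.e., lower predicted quality.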

3,722 citations

Book
31 Aug 1989
TL;DR: This new fourth edition of Peter Lee's book covers recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and reversible jump Markov chain Monte Carlo, providing a concise account of how the Bayesian approach to statistics develops and how it contrasts with the conventional approach.
Abstract: Bayesian statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee's book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and reversible jump Markov chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as well as how it contrasts with the conventional approach. The theory is built up step by step, and important notions such as sufficiency are brought out of a discussion of the salient features of specific examples. This edition includes expanded coverage of Gibbs sampling, with more numerical examples and treatments of OpenBUGS, R2WinBUGS and R2OpenBUGS; presents significant new material on recent techniques such as Bayesian importance sampling, variational Bayes, approximate Bayesian computation (ABC) and RJMCMC; provides extensive examples throughout the book to complement the theory presented; and is accompanied by a supporting website featuring new material and solutions. More and more students are realizing that they need to learn Bayesian statistics to meet their academic and professional goals. This book is best suited for use as a main text in courses on Bayesian statistics for third- and fourth-year undergraduates and postgraduate students.
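As a concrete instance of the prior-plus-likelihood-to-posterior workflow described above, here is a minimal conjugate (beta-binomial) update; the numbers are illustrative and not drawn from the book.

```python
from scipy import stats

# Prior belief about a coin's heads probability: Beta(2, 2).
a_prior, b_prior = 2, 2

# Observed data: 7 heads in 10 flips.
heads, flips = 7, 10

# Conjugacy: Beta prior + binomial likelihood -> Beta posterior.
a_post = a_prior + heads
b_post = b_prior + (flips - heads)

posterior = stats.beta(a_post, b_post)
print(posterior.mean())          # posterior mean = 9/14, about 0.643
print(posterior.interval(0.95))  # central 95% credible interval
```

When no conjugate form exists, the sampling methods the book covers (Gibbs, RJMCMC, ABC, variational Bayes) approximate the same posterior numerically.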

1,407 citations

Book
21 Jul 2016
TL;DR: This book takes an exhilarating journey through the revolution in data analysis following the introduction of electronic computation in the 1950s, with speculation on the future direction of statistics and data science.
Abstract: The twenty-first century has seen a breathtaking expansion of statistical methodology, both in scope and in influence. 'Big data', 'data science', and 'machine learning' have become familiar terms in the news, as statistical methods are brought to bear upon the enormous data sets of modern science and commerce. How did we get here? And where are we going? This book takes us on an exhilarating journey through the revolution in data analysis following the introduction of electronic computation in the 1950s. Beginning with classical inferential theories - Bayesian, frequentist, Fisherian - individual chapters take up a series of influential topics: survival analysis, logistic regression, empirical Bayes, the jackknife and bootstrap, random forests, neural networks, Markov chain Monte Carlo, inference after model selection, and dozens more. The distinctly modern approach integrates methodology and algorithms with statistical inference. The book ends with speculation on the future direction of statistics and data science.
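Among the topics the book surveys, the bootstrap lends itself to a compact illustration; the sketch below estimates the standard error of a sample mean by resampling, with illustrative data.

```python
import numpy as np

def bootstrap_se(data, stat=np.mean, n_resamples=2000, seed=0):
    """Nonparametric bootstrap: resample with replacement and take the
    standard deviation of the statistic across resamples."""
    rng = np.random.default_rng(seed)
    estimates = [stat(rng.choice(data, size=len(data), replace=True))
                 for _ in range(n_resamples)]
    return np.std(estimates, ddof=1)

sample = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.2])
print(bootstrap_se(sample))  # bootstrap estimate of the SE of the mean
```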

323 citations