Author

Alan C. Bovik

Bio: Alan C. Bovik is an academic researcher from the University of Texas at Austin. The author has contributed to research in the topics of image quality and video quality. The author has an h-index of 102 and has co-authored 837 publications receiving 96,088 citations. Previous affiliations of Alan C. Bovik include the University of Illinois at Urbana–Champaign and the University of Sydney.


Papers
Proceedings ArticleDOI
01 Nov 2014
TL;DR: A new image quality database that models diverse authentic distortions and artifacts affecting images captured with modern mobile devices, together with a new online crowdsourcing system that is being used to conduct a very large-scale, ongoing, multi-month subjective image quality assessment (IQA) study.
Abstract: We designed and created a new image quality database that models diverse authentic image distortions and artifacts that affect images that are captured using modern mobile devices. We also designed and implemented a new online crowdsourcing system, which we are using to conduct a very large-scale, on-going, multi-month image quality assessment (IQA) subjective study, wherein a wide range of diverse observers record their judgments of image quality. Our database currently consists of over 320,000 opinion scores on 1,163 authentically distorted images evaluated by over 7000 human observers. The new database will soon be made freely available for download and we envision that the fruits of our efforts will provide researchers with a valuable tool to benchmark and improve the performance of objective IQA algorithms.
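As a sketch of how such crowdsourced ratings are typically turned into per-image quality labels (an illustration of common practice, not the study's documented procedure): raw opinion scores are normalized per subject and then averaged into a mean opinion score (MOS) for each image. The per-subject z-score step below is a standard convention in subjective IQA studies and is an assumption here.

```python
import numpy as np

def mean_opinion_scores(ratings):
    """Aggregate raw subjective ratings into per-image scores.

    ratings: array of shape (num_subjects, num_images); NaN marks
    images a subject did not rate. Scores are z-normalized per
    subject before averaging -- a common convention to remove
    per-subject bias (assumed, not stated in the abstract).
    """
    r = np.asarray(ratings, dtype=float)
    mu = np.nanmean(r, axis=1, keepdims=True)   # per-subject mean
    sd = np.nanstd(r, axis=1, keepdims=True)    # per-subject spread
    z = (r - mu) / np.where(sd > 0, sd, 1.0)    # per-subject z-scores
    return np.nanmean(z, axis=0)                # average over subjects

# Example: 3 subjects rating 4 images on a 0-100 scale.
print(mean_opinion_scores([[80, 60, np.nan, 30],
                           [75, 55, 90, 25],
                           [85, 65, 95, np.nan]]))
```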

30 citations

Proceedings ArticleDOI
19 Apr 2009
TL;DR: In this paper, a new quality metric for range images based on the multi-scale Structural Similarity (MS-SSIM) Index is proposed, which operates in a manner similar to SSIM but allows for special handling of missing data.
Abstract: We propose a new quality metric for range images that is based on the multi-scale Structural Similarity (MS-SSIM) Index. The new metric operates in a manner similar to SSIM but allows for special handling of missing data. We demonstrate its utility by reevaluating the set of stereo algorithms evaluated on the Middlebury Stereo Vision Page http://vision.middlebury.edu/stereo/. The new algorithm, which we term the Range SSIM (R-SSIM) Index, possesses features that make it an attractive choice for assessing the quality of range images.
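A minimal single-scale sketch of the idea, assuming `valid` is a boolean mask marking pixels where both range maps contain data; the paper's R-SSIM is multi-scale, so this only illustrates how missing data can be excluded from the pooling step:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def masked_ssim(ref, dist, valid, sigma=1.5, L=255.0):
    """Single-scale SSIM pooled only over valid pixels -- a simplified
    stand-in for R-SSIM's missing-data handling (the actual metric
    is multi-scale)."""
    ref, dist = ref.astype(float), dist.astype(float)
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    f = lambda a: gaussian_filter(a, sigma)
    mu1, mu2 = f(ref), f(dist)
    v1 = f(ref * ref) - mu1 * mu1        # local variance, reference
    v2 = f(dist * dist) - mu2 * mu2      # local variance, distorted
    cov = f(ref * dist) - mu1 * mu2      # local covariance
    ssim_map = ((2 * mu1 * mu2 + C1) * (2 * cov + C2)) / \
               ((mu1 * mu1 + mu2 * mu2 + C1) * (v1 + v2 + C2))
    return ssim_map[valid].mean()        # pool over valid pixels only
```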

30 citations

Journal ArticleDOI
TL;DR: An objective VQA model called Space-Time GeneRalized Entropic Difference (GREED) is devised, which analyzes the statistics of spatial and temporal band-pass video coefficients and achieves state-of-the-art performance on the LIVE-YT-HFR Database when compared with existing VQA models.
Abstract: We consider the problem of conducting frame rate dependent video quality assessment (VQA) on videos of diverse frame rates, including high frame rate (HFR) videos. More generally, we study how perceptual quality is affected by frame rate, and how frame rate and compression combine to affect perceived quality. We devise an objective VQA model called Space-Time GeneRalized Entropic Difference (GREED) which analyzes the statistics of spatial and temporal band-pass video coefficients. A generalized Gaussian distribution (GGD) is used to model band-pass responses, while entropy variations between reference and distorted videos under the GGD model are used to capture video quality variations arising from frame rate changes. The entropic differences are calculated across multiple temporal and spatial subbands, and merged using a learned regressor. We show through extensive experiments that GREED achieves state-of-the-art performance on the LIVE-YT-HFR Database when compared with existing VQA models. The features used in GREED are highly generalizable and obtain competitive performance even on standard, non-HFR VQA databases. The implementation of GREED has been made available online: this https URL
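A rough sketch of the core statistical step (GREED's full features also involve temporal band-pass filtering, local conditioning, and a learned regressor, none of which are reproduced here): fit a GGD to band-pass coefficients of the reference and distorted videos and compare the entropies implied by the fitted models.

```python
import numpy as np
from scipy.special import gamma

def fit_ggd(x):
    """Moment-matching GGD fit via the standard ratio estimator,
    searching shape values on a grid (zero-mean coefficients assumed)."""
    x = np.asarray(x, dtype=float).ravel()
    betas = np.arange(0.2, 10.0, 0.001)
    rho = gamma(1 / betas) * gamma(3 / betas) / gamma(2 / betas) ** 2
    r = np.mean(x ** 2) / np.mean(np.abs(x)) ** 2
    beta = betas[np.argmin(np.abs(rho - r))]
    alpha = np.sqrt(np.mean(x ** 2) * gamma(1 / beta) / gamma(3 / beta))
    return alpha, beta

def ggd_entropy(alpha, beta):
    """Differential entropy of a GGD with scale alpha and shape beta."""
    return 1 / beta + np.log(2 * alpha * gamma(1 / beta) / beta)

def entropic_difference(ref_coeffs, dist_coeffs):
    """Entropy gap between reference and distorted band-pass
    coefficients under fitted GGD models -- a simplified stand-in
    for GREED's per-subband entropic differences."""
    return abs(ggd_entropy(*fit_ggd(ref_coeffs))
               - ggd_entropy(*fit_ggd(dist_coeffs)))
```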

30 citations

Journal ArticleDOI
TL;DR: The proposed algorithm, which is completely blind (requiring no reference videos or training on subjective scores), is called the Motion and Disparity-based 3D video quality evaluator (MoDi3D); it delivers competitive performance over a wide variety of datasets, including the IRCCYN dataset, the WaterlooIVC Phase I dataset, the LFOVIA dataset, and the proposed LFOVIAS3DPh2 S3D video dataset.
Abstract: We present a new subjective and objective study on full high-definition (HD) stereoscopic (3D or S3D) video quality. In the subjective study, we constructed an S3D video dataset with 12 pristine and 288 test videos, and the test videos are generated by applying H.264 and H.265 compression, blur, and frame freeze artifacts. We also propose a no reference (NR) objective video quality assessment (QA) algorithm that relies on measurements of the statistical dependencies between the motion and disparity subband coefficients of S3D videos. Inspired by the Generalized Gaussian Distribution (GGD) approach, we model the joint statistical dependencies between the motion and disparity components as following a Bivariate Generalized Gaussian Distribution (BGGD). We estimate the BGGD model parameters (α, β) and the coherence measure (Ψ) from the eigenvalues of the sample covariance matrix (M) of the BGGD. In turn, we model the BGGD parameters of pristine S3D videos using a Multivariate Gaussian (MVG) distribution. The likelihood of a test video's MVG model parameters coming from the pristine MVG model is computed and shown to play a key role in the overall quality estimation. We also estimate the global motion content of each video by averaging the SSIM scores between pairs of successive video frames. To estimate the test S3D video's spatial quality, we apply the popular 2D NR unsupervised NIQE image QA model on a frame-by-frame basis on both views. The overall quality of a test S3D video is finally computed by pooling the test S3D video's likelihood estimates, global motion strength, and spatial quality scores. The proposed algorithm, which is completely blind (requiring no reference videos or training on subjective scores), is called the Motion and Disparity-based 3D video quality evaluator (MoDi3D). We show that MoDi3D delivers competitive performance over a wide variety of datasets, including the IRCCYN dataset, the WaterlooIVC Phase I dataset, the LFOVIA dataset, and our proposed LFOVIAS3DPh2 S3D video dataset.
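To make the eigenvalue and likelihood steps concrete, here is a small sketch. Two loudly flagged assumptions: the coherence definition below, (λ1 − λ2)/(λ1 + λ2), is one common eigenvalue-based measure and not necessarily the paper's exact Ψ, and per-video feature vectors are assumed to be already extracted.

```python
import numpy as np
from scipy.stats import multivariate_normal

def coherence(motion, disparity):
    """Eigenvalue-based coherence of the joint (motion, disparity)
    sample covariance. Uses (l1 - l2)/(l1 + l2), a common definition;
    the paper's exact formula for Psi may differ."""
    M = np.cov(np.vstack([np.ravel(motion), np.ravel(disparity)]))
    l2, l1 = np.linalg.eigvalsh(M)      # ascending eigenvalues
    return (l1 - l2) / (l1 + l2)

def pristine_log_likelihood(test_feats, pristine_feats):
    """Log-likelihood of a test video's feature vector under a
    multivariate Gaussian fit to pristine-video features, mirroring
    the paper's MVG modeling step (feature extraction assumed done)."""
    mu = pristine_feats.mean(axis=0)
    cov = np.cov(pristine_feats, rowvar=False)
    return multivariate_normal(mu, cov, allow_singular=True).logpdf(test_feats)
```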

30 citations

Journal ArticleDOI
TL;DR: This work proposes a new general formulation for nonlinear set-theoretic image estimation, based on a flexible constraint framework that encapsulates meaningful structural image assumptions, and demonstrates high-quality image estimation as measured by local feature integrity and improvement in SNR.
Abstract: We introduce a new approach to image estimation based on a flexible constraint framework that encapsulates meaningful structural image assumptions. Piecewise image models (PIMs) and local image models (LIMs) are defined and utilized to estimate noise-corrupted images. PIMs and LIMs are defined by image sets obeying certain piecewise or local image properties, such as piecewise linearity or local monotonicity. By optimizing local image characteristics imposed by the models, image estimates are produced with respect to the characteristic sets defined by the models. Thus, we propose a new general formulation for nonlinear set-theoretic image estimation. Detailed image estimation algorithms and examples are given using two PIMs: piecewise constant (PICO) and piecewise linear (PILI) models, and two LIMs: locally monotonic (LOMO) and locally convex/concave (LOCO) models. These models define properties that hold over local image neighborhoods, and the corresponding image estimates may be inexpensively computed by iterative optimization algorithms. Forcing the model constraints to hold at every image coordinate of the solution defines a nonlinear regression problem that is generally nonconvex and combinatorial. However, approximate solutions may be computed in reasonable time using the novel generalized deterministic annealing (GDA) optimization technique, which is particularly well suited to locally constrained problems of this type. Results are given for corrupted imagery with signal-to-noise ratio (SNR) as low as 2 dB, demonstrating high-quality image estimation as measured by local feature integrity and improvement in SNR.
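For concreteness, the local monotonicity (LOMO) constraint is simple to state in code. The sketch below only checks the property on a 1-D signal; the paper's contribution is finding the nearest signal satisfying such constraints via generalized deterministic annealing, which is not implemented here.

```python
import numpy as np

def is_locally_monotonic(x, degree):
    """True if every sliding window of the given length in the 1-D
    signal x is nondecreasing or nonincreasing -- the LOMO property
    that the estimator enforces on the solution."""
    x = np.asarray(x, dtype=float)
    for i in range(len(x) - degree + 1):
        d = np.diff(x[i:i + degree])
        if not (np.all(d >= 0) or np.all(d <= 0)):
            return False
    return True

print(is_locally_monotonic([0, 1, 1, 0, 0], 3))  # True: every 3-window is monotone
print(is_locally_monotonic([0, 2, 0, 2, 0], 3))  # False: 3-windows oscillate
```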

30 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index for image quality assessment is proposed, based on the degradation of structural information, and is validated against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
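For reference, the structural similarity index between two aligned image patches $x$ and $y$ takes the standard form

$\mathrm{SSIM}(x,y) = \dfrac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$

where $\mu_x, \mu_y$ are local means, $\sigma_x^2, \sigma_y^2$ local variances, $\sigma_{xy}$ the local covariance, and $C_1, C_2$ small stabilizing constants; the local statistics are typically computed under a Gaussian weighting window and the resulting quality map is averaged over the image.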

40,609 citations

Book
01 Jan 1998
TL;DR: A textbook tour of wavelet analysis, from Fourier and time-frequency methods through frames, wavelet bases, and wavelet packet and local cosine bases, to approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations

Proceedings ArticleDOI
21 Jul 2017
TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of Twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
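A minimal sketch of the training objective, assuming PyTorch and user-supplied networks G and D, where D(input, output) returns real/fake logits; the paper combines the conditional adversarial term with an L1 term weighted by λ = 100:

```python
import torch
import torch.nn.functional as F

def pix2pix_losses(D, G, x, y, lam=100.0):
    """Discriminator and generator losses for the pix2pix objective:
    a conditional adversarial term on (input, output) pairs plus a
    lambda-weighted L1 reconstruction term. D and G are assumed,
    user-supplied networks; D(inp, out) returns logits."""
    fake = G(x)
    # Discriminator: classify real pairs as 1, fake pairs as 0.
    real_logits = D(x, y)
    fake_logits = D(x, fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(
                  real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(
                  fake_logits, torch.zeros_like(fake_logits)))
    # Generator: fool D while staying close to ground truth in L1.
    adv_logits = D(x, fake)
    g_loss = (F.binary_cross_entropy_with_logits(
                  adv_logits, torch.ones_like(adv_logits))
              + lam * F.l1_loss(fake, y))
    return d_loss, g_loss
```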

11,958 citations

Posted Content
TL;DR: Conditional adversarial networks, as discussed by the authors, are a general-purpose solution to image-to-image translation problems that can be used to synthesize photos from label maps, reconstruct objects from edge maps, and colorize images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.

11,127 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits better fit submarine fan systems. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations