Author

Alan C. Bovik

Bio: Alan C. Bovik is an academic researcher from the University of Texas at Austin. He has contributed to research on the topics of image quality and video quality, has an h-index of 102, and has co-authored 837 publications receiving 96,088 citations. Previous affiliations of Alan C. Bovik include the University of Illinois at Urbana–Champaign and the University of Sydney.


Papers
Journal ArticleDOI
TL;DR: A proxy network is constructed, broadly termed ProxIQA, which mimics the perceptual model while serving as a loss layer of the network and is able to demonstrate a bitrate reduction of as much as 31% over MSE optimization, given a specified perceptual quality (VMAF) level.
Abstract: The use of $\ell _{p}$ (p = 1,2) norms has largely dominated the measurement of loss in neural networks due to their simplicity and analytical properties. However, when used to assess the loss of visual information, these simple norms are not very consistent with human perception. Here, we describe a different “proximal” approach to optimize image analysis networks against quantitative perceptual models. Specifically, we construct a proxy network, broadly termed ProxIQA, which mimics the perceptual model while serving as a loss layer of the network. We experimentally demonstrate how this optimization framework can be applied to train an end-to-end optimized image compression network. By building on top of an existing deep image compression model, we are able to demonstrate a bitrate reduction of as much as 31% over MSE optimization, given a specified perceptual quality (VMAF) level.
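The proxy-loss idea above can be sketched numerically. In this toy illustration (hypothetical names, not the paper's code), a squared-error "metric" stands in for a black-box perceptual model such as VMAF: a differentiable proxy is fit to mimic it, and the proxy's gradient then drives optimization.

```python
# Toy sketch of the ProxIQA idea: fit a differentiable proxy to a
# non-differentiable "perceptual metric", then optimize through the proxy.
import numpy as np

rng = np.random.default_rng(0)

def perceptual_metric(x):
    # stand-in for a black-box quality model; assume no gradients available
    return float(np.round(np.mean(x ** 2), 3))

# fit a differentiable proxy q(x) ~ w * mean(x^2) on sampled signals
X = rng.normal(size=(200, 8))
targets = np.array([perceptual_metric(x) for x in X])
feats = np.mean(X ** 2, axis=1)
w = float(np.dot(feats, targets) / np.dot(feats, feats))  # least squares

def proxy_loss_grad(x):
    # gradient of w * mean(x^2) with respect to x — usable as a loss layer
    return w * 2.0 * x / x.size

# gradient descent on the proxy drives the true metric down
x = rng.normal(size=8)
before = perceptual_metric(x)
for _ in range(200):
    x -= 0.5 * proxy_loss_grad(x)
after = perceptual_metric(x)
```

The point is that the optimized network never needs gradients from the perceptual model itself, only from the proxy trained to mimic it.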

52 citations

Proceedings ArticleDOI
05 Feb 2014
TL;DR: A novel natural-scene-statistics-based blind image quality assessment model that is created by training a deep belief net to discover good feature representations that are used to learn a regressor for quality prediction is presented.
Abstract: We present a novel natural-scene-statistics-based blind image quality assessment model that is created by training a deep belief net to discover good feature representations that are used to learn a regressor for quality prediction. The proposed deep model has an unsupervised pre-training stage followed by a supervised fine-tuning stage, enabling it to generalize over different distortion types, mixtures, and severities. We evaluated our new model on a recently created database of images afflicted by real distortions, and show that it outperforms current state-of-the-art blind image quality prediction models.
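The two-stage pipeline described above can be sketched with simple stand-ins: PCA substitutes for the unsupervised pre-training stage and least squares for the fine-tuned quality regressor. The paper's actual model is a deep belief net; everything here is illustrative.

```python
# Stage 1: unsupervised feature learning; Stage 2: supervised regression.
import numpy as np

rng = np.random.default_rng(1)

# synthetic low-rank "NSS features" with quality driven by latent factors
F = rng.normal(size=(300, 5))                      # latent factors
X = F @ rng.normal(size=(5, 10)) + 0.05 * rng.normal(size=(300, 10))
quality = F @ np.array([1.0, -0.5, 0.3, 0.0, 0.2])

# stage 1: unsupervised representation (PCA stand-in for DBN pre-training)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T                                  # 5 learned features

# stage 2: supervised fine-tuning — regress quality on learned features
w, *_ = np.linalg.lstsq(Z, quality - quality.mean(), rcond=None)
pred = Z @ w + quality.mean()
corr = float(np.corrcoef(pred, quality)[0, 1])
```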

52 citations

Journal ArticleDOI
TL;DR: A model expressing the joint impact of spatial resolution and JPEG compression quality factor on immersive image quality is developed, achieving high Pearson and Spearman correlations against subjective quality judgements.
Abstract: We develop a model that expresses the joint impact of spatial resolution $s$ and JPEG compression quality factor $q^{f}$ on immersive image quality. The model is expressed as the product of optimized exponential functions of these factors. The model is tested on a subjective database of immersive image contents rendered on a head mounted display. High Pearson correlation and Spearman correlation (>0.95) and small relative root mean squared error (<5.6%) are achieved between the model predictions and the subjective quality judgements. The immersive ground-truth images along with the rest of the database are made available for future research and comparisons.
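The model's product-of-exponentials form can be sketched as follows; the parameter values here are illustrative assumptions, not the fitted values from the paper.

```python
# Quality as a product of exponential functions of spatial resolution s
# and JPEG quality factor qf: each factor rises toward 1 as its argument
# grows, so predicted quality saturates at high resolution/quality.
import math

def predicted_quality(s, qf, a=2.0, b=15.0, qmax=5.0):
    # Q(s, qf) = Qmax * exp(-a / s) * exp(-b / qf)   (illustrative a, b)
    return qmax * math.exp(-a / s) * math.exp(-b / qf)

low = predicted_quality(s=1.0, qf=20)    # low resolution, heavy compression
high = predicted_quality(s=4.0, qf=90)   # high resolution, light compression
```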

51 citations

Journal ArticleDOI
TL;DR: It is revealed that the use of the different types of aesthetic labels can be developed within the same statistical framework, which is used to create a unified probabilistic formulation of all the three IAA tasks.
Abstract: Image aesthetic assessment (IAA) has been attracting considerable attention in recent years due to the explosive growth of digital photography in Internet and social networks. The IAA problem is inherently challenging, owing to the ineffable nature of the human sense of aesthetics and beauty, and its close relationship to understanding pictorial content. Three different approaches to framing and solving the problem have been posed: binary classification, average score regression and score distribution prediction. Solutions that have been proposed have utilized different types of aesthetic labels and loss functions to train deep IAA models. However, these studies ignore the fact that the three different IAA tasks are inherently related. Here, we reveal that the use of the different types of aesthetic labels can be developed within the same statistical framework, which we use to create a unified probabilistic formulation of all three IAA tasks. This unified formulation motivates the use of an efficient and effective loss function for training deep IAA models to conduct different tasks. We also discuss the problem of learning from a noisy raw score distribution which hinders network performance. We then show that by fitting the raw score distribution to a more stable and discriminative score distribution, we are able to train a single model which is able to obtain highly competitive performance on all three IAA tasks. Extensive qualitative analysis and experimental results on image aesthetic benchmarks validate the superior performance afforded by the proposed formulation. The source code is available at https://github.com/HuiZeng/Unified_IAA .
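The relationship the paper exploits can be sketched concretely: a normalized score distribution subsumes the other two label types, since its mean gives the regression target and thresholding the mean gives the binary label. The vote counts and the threshold of 5 below are illustrative assumptions.

```python
# One score distribution yields all three IAA label types.
import numpy as np

scores = np.arange(1, 11)                                      # scale 1..10
raw = np.array([0, 1, 2, 5, 9, 12, 9, 4, 2, 1], dtype=float)  # vote counts
dist = raw / raw.sum()               # task 3: score distribution label

mean_score = float(np.dot(scores, dist))   # task 2: average score label
is_high_aesthetic = mean_score > 5.0       # task 1: binary label
```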

51 citations

Journal ArticleDOI
TL;DR: This paper introduces VPISC, a new digital image sequence (video) coding process that possesses significant advantages relative to other technologies; in particular, it is extremely efficient in terms of the computational effort required.
Abstract: Visual pattern image sequence coding (VPISC) is a new digital image sequence (video) coding process that possesses significant advantages relative to other technologies; in particular, it is extremely efficient in terms of the computational effort required. It is designed to exploit properties of the human visual system (HVS), and thus yields high visual fidelity. Visual quality criteria are deliberately chosen over information-theoretic ones on the grounds that, in images intended for human viewing, visual criteria are the most meaningful ones. VPISC yields impressive compression comparable to other recent methods, such as motion-compensated vector quantization. VPISC divides the images into spatiotemporal cubes, which are then independently matched with one of a small, predetermined set of visually meaningful three-dimensional space-time patterns. The pattern set is chosen to conform to specific characteristics of the HVS. Also introduced are two modifications of VPISC: adaptive and foveal VPISC. These are spatiotemporally nonuniform implementations that code different portions of the image sequence at different resolutions, according to either a fidelity criterion (for AVPISC) or a foveation criterion (for FVPISC).
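The cube-matching step can be illustrated in miniature: a video volume is split into spatiotemporal cubes and each cube is coded as the index of the closest pattern in a small predetermined set. The three patterns below are toy placeholders, not the HVS-derived patterns of the paper.

```python
# Match each 2x2x2 spatiotemporal cube of a toy video to the nearest
# pattern in a tiny codebook (mean-removed, so shape is matched rather
# than brightness).
import numpy as np

rng = np.random.default_rng(2)
video = rng.random((4, 8, 8))          # (time, height, width)
C = 2                                  # cube edge length

# toy pattern set: flat, horizontal edge, vertical edge
flat = np.ones((C, C, C))
hedge = np.zeros((C, C, C)); hedge[:, 0, :] = 1.0
vedge = np.zeros((C, C, C)); vedge[:, :, 0] = 1.0
patterns = np.stack([flat, hedge, vedge])

indices = []
for t in range(0, 4, C):
    for y in range(0, 8, C):
        for x in range(0, 8, C):
            cube = video[t:t+C, y:y+C, x:x+C]
            cube = cube - cube.mean()
            errs = [np.sum((cube - (p - p.mean())) ** 2) for p in patterns]
            indices.append(int(np.argmin(errs)))

# each 8-sample cube is now represented by a single small pattern index
n_cubes = len(indices)
```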

50 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, which can be applied to both subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
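A simplified, single-window version of the structural similarity index can be sketched as follows; the published SSIM applies this computation over local sliding windows and averages, so this global form is only a sketch of the formula.

```python
# Global SSIM between two images in [0, 1]: luminance, contrast, and
# structure terms combined with the standard stabilizing constants.
import numpy as np

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(3)
img = rng.random((16, 16))
same = float(ssim_global(img, img))          # identical images score 1.0
noisy_img = np.clip(img + 0.2 * rng.standard_normal((16, 16)), 0, 1)
noisy = float(ssim_global(img, noisy_img))   # noise lowers the score
```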

40,609 citations

Book
01 Jan 1998
TL;DR: A textbook tour of wavelet analysis, progressing from Fourier and discrete foundations through time-frequency analysis, frames, and wavelet bases to wavelet packets, local cosine bases, approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.
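To make the wavelet machinery above concrete, here is one level of the 1-D Haar transform, the simplest wavelet basis treated in the book, together with its exact inverse.

```python
# One level of the orthonormal Haar wavelet transform and its inverse.
import numpy as np

def haar_step(x):
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    dif = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return avg, dif

def haar_inverse(avg, dif):
    x = np.empty(2 * avg.size)
    x[0::2] = (avg + dif) / np.sqrt(2)
    x[1::2] = (avg - dif) / np.sqrt(2)
    return x

sig = np.array([4.0, 4.0, 2.0, 0.0])
a, d = haar_step(sig)       # smooth pairs give zero detail coefficients
rec = haar_inverse(a, d)    # perfect reconstruction
```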

17,693 citations

Proceedings ArticleDOI
21 Jul 2017
TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of Twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
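The pix2pix objective combines an adversarial term with an L1 reconstruction term, L = L_cGAN + λ·L1. A numerical sketch of the generator-side loss (the function and inputs here are illustrative; λ = 100 follows the paper's setting):

```python
# Generator loss: fool the discriminator while staying close to the
# ground-truth output in L1.
import numpy as np

def generator_loss(d_fake, fake, target, lam=100.0):
    # d_fake: discriminator probabilities on generated (input, output) pairs
    adv = float(-np.mean(np.log(d_fake + 1e-12)))   # adversarial term
    l1 = float(np.mean(np.abs(fake - target)))      # reconstruction term
    return adv + lam * l1

rng = np.random.default_rng(4)
target = rng.random((8, 8))
good = generator_loss(np.array([0.9]), target + 0.01, target)  # convincing
bad = generator_loss(np.array([0.1]), target + 0.5, target)    # rejected
```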

11,958 citations

Posted Content
TL;DR: Conditional adversarial networks, as discussed by the authors, offer a general-purpose solution to image-to-image translation problems that can be used to synthesize photos from label maps, reconstruct objects from edge maps, and colorize images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.

11,127 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations