Author

Alan C. Bovik

Bio: Alan C. Bovik is an academic researcher from the University of Texas at Austin. The author has contributed to research in topics: Image quality & Video quality. The author has an h-index of 102 and has co-authored 837 publications receiving 96,088 citations. Previous affiliations of Alan C. Bovik include the University of Illinois at Urbana–Champaign & the University of Sydney.


Papers
Proceedings ArticleDOI
01 Jun 2016
TL;DR: Comparison results show that it is feasible and reliable to apply Random Forests or GBRT for S3D visual discomfort prediction with better performance than SVR.
Abstract: Most stereoscopic 3D (S3D) image visual discomfort predictors use the Support Vector Regressor (SVR) as the regression model. However, there are other good regression models, such as Random Forests (RF) and the Gradient Boost Regression Tree (GBRT). Here we study the efficacy of these regression models for S3D image visual discomfort prediction. We deployed several regression models to predict the visual discomfort scores of S3D images using three kinds of features extracted from the images in the IEEE-SA and EPFL S3D image databases. The prediction performance of the different regression models was then evaluated and compared. We also studied the issue of over-fitting, which can affect model performance. The comparison results show that it is feasible and reliable to apply Random Forests or GBRT for S3D visual discomfort prediction, with better performance than SVR.
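The model comparison described above can be sketched with scikit-learn's implementations of the three regressors. This is a minimal illustration, not the paper's experiment: the features and discomfort scores below are random stand-ins for the IEEE-SA/EPFL features, and the hyperparameters are library defaults rather than the authors' settings.

```python
# Hedged sketch: cross-validated comparison of SVR, RF, and GBRT regressors.
# Synthetic data stands in for the S3D discomfort features and scores.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                        # stand-in feature vectors
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=200)   # stand-in discomfort scores

for name, model in [("SVR", SVR()),
                    ("RF", RandomForestRegressor(n_estimators=100, random_state=0)),
                    ("GBRT", GradientBoostingRegressor(random_state=0))]:
    # 5-fold cross-validation guards against the over-fitting issue noted above.
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```

Cross-validated scoring is the natural way to surface over-fitting: a model that memorizes the training folds will show a visibly lower held-out R².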

3 citations

Journal ArticleDOI
TL;DR: This work considers the problem of learning perceptually relevant video quality representations in a self-supervised manner, and indicates that compelling representations with perceptual bearing can be obtained using self-supervised learning.
Abstract: Perceptual video quality assessment (VQA) is an integral component of many streaming and video sharing platforms. Here we consider the problem of learning perceptually relevant video quality representations in a self-supervised manner. Distortion type identification and degradation level determination are employed as an auxiliary task to train a deep learning model containing a deep Convolutional Neural Network (CNN) that extracts spatial features, as well as a recurrent unit that captures temporal information. The model is trained using a contrastive loss, and we therefore refer to this training framework and resulting model as CONtrastive VIdeo Quality EstimaTor (CONVIQT). During testing, the weights of the trained model are frozen, and a linear regressor maps the learned features to quality scores in a no-reference (NR) setting. We conduct comprehensive evaluations of the proposed model on multiple VQA databases by analyzing the correlations between model predictions and ground-truth quality ratings, and achieve competitive performance when compared to state-of-the-art NR-VQA models, even though it is not trained on those databases. Our ablation experiments demonstrate that the learned representations are highly robust and generalize well across synthetic and realistic distortions. Our results indicate that compelling representations with perceptual bearing can be obtained using self-supervised learning. The implementations used in this work have been made available at https://github.com/pavancm/CONVIQT.
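The evaluation protocol described above (frozen encoder, trainable linear head) can be sketched as follows. Everything here is a stand-in: `extract_features` is a hypothetical placeholder for the frozen CONVIQT encoder, and the videos and mean-opinion scores are random, so this only illustrates the linear-probe protocol, not the model itself.

```python
# Minimal sketch of the frozen-features + linear-regressor NR-VQA protocol.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def extract_features(video):
    # Hypothetical stand-in for a frozen pretrained encoder whose weights
    # are never updated during this stage.
    return rng.normal(size=256)

videos = range(50)                         # placeholder video identifiers
feats = np.stack([extract_features(v) for v in videos])
mos = rng.uniform(1, 5, size=50)           # placeholder mean-opinion scores

# Only the linear head is trained; the representation stays fixed.
regressor = Ridge(alpha=1.0).fit(feats, mos)
pred = regressor.predict(feats)
```

Freezing the encoder and fitting only a linear map is the standard way to test whether a self-supervised representation already encodes the target quantity, since the linear head cannot add representational power of its own.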

3 citations

Proceedings ArticleDOI
01 Jun 1991
TL;DR: Two extensions of VPISC, termed Foveal VPISC (FVPISC) and Adaptive VPISC (AVPISC), are proposed; FVPISC uses a foveation criterion to select a region of interest, while AVPISC adaptively determines which regions require high-resolution coding in order to maintain uniform image quality over the entire image.
Abstract: Visual Pattern Image Sequence Coding (VPISC) is a pyramidal image coding scheme which utilizes human visual system (HVS) properties to achieve low bit rates while maintaining a good perceived image quality, all with extremely low computational cost. This paper describes extensions of VPISC, termed Foveal VPISC (FVPISC) and Adaptive VPISC (AVPISC). Both algorithms produce decreased bit rates by selectively allowing some image regions to be coded at low resolution. In FVPISC, a foveation criterion is used to select a region of interest. In AVPISC, the algorithm adaptively determines which regions require high-resolution coding in order to maintain uniform image quality over the entire image.
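The resolution-allocation principle behind FVPISC can be illustrated with a toy example: keep a region of interest at full resolution and represent the periphery by coarse block averages. This is not the VPISC codec itself, just a sketch of the foveation idea under assumed parameters (a circular ROI and a 4x downsampling factor).

```python
# Toy foveation sketch: full resolution inside a circular ROI, block-averaged
# low resolution outside it. Illustrative only; not the VPISC/FVPISC coder.
import numpy as np

def foveate(img, center, radius, factor=4):
    """Replace pixels outside a circular ROI with coarse block averages."""
    h, w = img.shape
    # Block-average the whole image, then upsample back to full size.
    coarse = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    coarse_up = np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)
    # Keep original pixels inside the region of interest.
    yy, xx = np.mgrid[0:h, 0:w]
    roi = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return np.where(roi, img, coarse_up)

img = np.arange(64 * 64, dtype=float).reshape(64, 64)  # toy "image"
out = foveate(img, center=(32, 32), radius=10)
```

Coding the coarse representation in the periphery is what yields the bit-rate savings: far fewer samples need to be encoded there, while the ROI retains full fidelity.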

3 citations

Posted Content
TL;DR: A deep learning approach is proposed that learns to predict a 180° panoramic image from a narrow-view image, using a foveated framework that applies different strategies to near-periphery and mid-periphery regions.
Abstract: Presenting context images to a viewer's peripheral vision is one of the most effective techniques to enhance immersive visual experiences. However, most images only present a narrow view, since the field-of-view (FoV) of standard cameras is small. To overcome this limitation, we propose a deep learning approach that learns to predict a 180° panoramic image from a narrow-view image. Specifically, we design a foveated framework that applies different strategies on near-periphery and mid-periphery regions. Two networks are trained separately, and then are employed jointly to sequentially perform narrow-to-90° generation and 90°-to-180° generation. The generated outputs are then fused with their aligned inputs to produce expanded equirectangular images for viewing. Our experimental results show that single-view-to-panoramic image generation using deep learning is both feasible and promising.

3 citations

Proceedings ArticleDOI
08 Apr 1996
TL;DR: A simple algorithm for fast lossless compression of grayscale images that consists of differential pulse code modulation (DPCM) followed by Huffman coding of the most-likely residual magnitudes and requires significantly less computation than any compression scheme known to the authors.
Abstract: In this paper, we present a simple algorithm for fast lossless compression of grayscale images. It consists of differential pulse code modulation (DPCM) followed by Huffman coding of the most-likely residual magnitudes. Our tests show that it attains higher compression with less computation than the lossless JPEG method. For the large dynamic ranges possible with pixel sizes greater than 8-bits, it requires significantly less computation than any compression scheme known to the authors.
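The DPCM-plus-Huffman pipeline described above can be sketched in a few lines. This is a hedged toy illustration, not the authors' exact scheme: it uses a simple left-neighbor predictor and a generic Huffman code over all residuals, whereas the paper codes only the most-likely residual magnitudes with a designed table.

```python
# Hedged sketch: left-neighbor DPCM followed by Huffman coding of residuals.
import heapq
from collections import Counter
import numpy as np

def dpcm_residuals(img):
    """Left-neighbor DPCM: residual = pixel minus its left neighbor."""
    res = img.astype(int).copy()
    res[:, 1:] -= img[:, :-1].astype(int)
    return res

def huffman_code(symbols):
    """Build a Huffman table {symbol: bitstring} from symbol frequencies."""
    counts = Counter(symbols)
    if len(counts) == 1:                     # degenerate single-symbol case
        return {next(iter(counts)): "0"}
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(counts.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        n1, _, t1 = heapq.heappop(heap)      # two least-frequent subtrees
        n2, i2, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (n1 + n2, i2, merged))
    return heap[0][2]

img = np.tile(np.arange(8, dtype=np.uint8), (8, 1))   # smooth toy "image"
res = dpcm_residuals(img)
table = huffman_code(res.ravel().tolist())
bits = sum(len(table[s]) for s in res.ravel().tolist())
```

On smooth images the prediction residuals cluster tightly around zero, so short codewords cover most pixels and the coded size falls well below the raw 8 bits per pixel.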

3 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and is validated against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
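The structural similarity index has a compact closed form, which can be illustrated with a simplified single-window version computed from global image statistics. Note this is only a sketch of the formula: the reference implementation applies it locally (with an 11x11 Gaussian window in the released MATLAB code) and averages the local values.

```python
# Simplified global-statistics SSIM, illustrating the formula only.
import numpy as np

def ssim_global(x, y, data_range=255.0):
    c1 = (0.01 * data_range) ** 2            # stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.random.default_rng(0).uniform(0, 255, size=(64, 64))
ssim_global(img, img)   # identical images score 1.0
```

The first factor compares luminance and the second combines contrast and structure; the index equals 1 only for identical signals and decreases as structural correlation is lost.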

40,609 citations

Book
01 Jan 1998
TL;DR: A textbook spanning Fourier analysis, time-frequency representations, frames, wavelet bases, wavelet packet and local cosine bases, approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations

Proceedings ArticleDOI
21 Jul 2017
TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of Twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
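The "learned loss" described above is the conditional GAN objective from the pix2pix paper, which pairs an adversarial term conditioned on the input image with an L1 reconstruction term (with generator G, discriminator D, input x, target y, noise z):

```latex
\mathcal{L}_{cGAN}(G, D) =
  \mathbb{E}_{x,y}\!\left[\log D(x, y)\right]
  + \mathbb{E}_{x,z}\!\left[\log\left(1 - D\!\left(x, G(x, z)\right)\right)\right],
\qquad
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\!\left[\lVert y - G(x, z)\rVert_1\right],
\qquad
G^{*} = \arg\min_{G}\max_{D}\ \mathcal{L}_{cGAN}(G, D) + \lambda\, \mathcal{L}_{L1}(G).
```

The L1 term anchors low-frequency correctness while the discriminator, conditioned on x, penalizes outputs that look implausible given the input, supplying the structure a hand-designed per-pixel loss would miss.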

11,958 citations

Posted Content
TL;DR: Conditional adversarial networks are proposed by the authors as a general-purpose solution to image-to-image translation problems, shown to be effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.

11,127 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits better fit submarine fan systems. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. This study presents a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations