Author

Alan C. Bovik

Bio: Alan C. Bovik is an academic researcher from the University of Texas at Austin. The author has contributed to research in topics: Image quality & Video quality. The author has an h-index of 102 and has co-authored 837 publications receiving 96,088 citations. Previous affiliations of Alan C. Bovik include the University of Illinois at Urbana–Champaign and the University of Sydney.


Papers
Proceedings ArticleDOI
01 Dec 2000
TL;DR: An unequal error protection technique for foveation-based error resilience over highly error-prone mobile networks is introduced and unequal delay-constrained ARQ and RCPC codes in H.223 Annex C are employed.
Abstract: In this paper, we introduce an unequal error protection technique for foveation-based error resilience over highly error-prone mobile networks. For point-to-point visual communications, visual quality can be significantly increased by using foveation-based error resilience, where each frame is divided into foveated and background layers according to the gaze direction of the human eye, and two bitstreams are generated. In an effort to increase the source throughput of the foveated layer, we employ unequal delay-constrained ARQ and RCPC (rate-compatible punctured convolutional) codes in H.223 Annex C. In simulation, visual quality improves by 0.3 dB to 1 dB over channel SNRs of 5 dB to 15 dB.
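As an illustration of the layered-protection idea, here is a minimal Python sketch that assigns a stronger (lower-rate) RCPC code and a larger delay-constrained ARQ retry budget to blocks near the gaze point. The circular layer split, the code rates, and the retry limits are illustrative assumptions, not the parameters used in the paper.

```python
# Sketch: foveation-weighted unequal error protection.
# The fovea radius, RCPC rates, and ARQ retry limits below are
# illustrative choices, not the paper's parameters.

from dataclasses import dataclass

@dataclass
class LayerProtection:
    rcpc_rate: float   # channel-code rate (lower = more redundancy)
    max_retries: int   # delay-constrained ARQ retransmission limit

def protection_for_block(block_center, gaze, fovea_radius=64):
    """Assign stronger protection to blocks near the gaze point."""
    dx = block_center[0] - gaze[0]
    dy = block_center[1] - gaze[1]
    in_fovea = dx * dx + dy * dy <= fovea_radius ** 2
    if in_fovea:
        # Foveated layer: low-rate RCPC code plus more ARQ retries.
        return LayerProtection(rcpc_rate=1 / 3, max_retries=2)
    # Background layer: weaker protection saves channel bits.
    return LayerProtection(rcpc_rate=2 / 3, max_retries=0)

if __name__ == "__main__":
    gaze = (160, 120)
    for center in [(150, 130), (400, 300)]:
        print(center, protection_for_block(center, gaze))
```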

5 citations

Proceedings ArticleDOI
20 Feb 2009
TL;DR: The wavelet-based approaches are shown to perform well on natural scene images that usually contain regions of distinct textures, and the color saliency approach performs well on images containing objects of high saturation and brightness.
Abstract: In this paper we explore four distinct approaches to extracting regions of interest (ROI) from still images. We show the results obtained for each of the proposed approaches, and we demonstrate where each method outperforms the others. The four approaches are: 1) a block-based discrete wavelet transform (DWT) algorithm, 2) a color saliency approach, 3) a wavelet-coefficient-variance saliency approach, and 4) an approach based on mean-shift clustering of image pixels. The wavelet-based approaches are shown to perform well on natural scene images that usually contain regions of distinct textures. The color saliency approach performs well on images containing objects of high saturation and brightness, and the mean-shift clustering approach partitions the image into regions according to the density distribution of pixel intensities.
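A rough sketch of the third approach (wavelet-coefficient-variance saliency), assuming PyWavelets: blocks whose DWT detail coefficients have high variance are treated as textured and hence likely ROI. The block size and wavelet choice here are assumptions for illustration, not the paper's settings.

```python
# Sketch: per-block saliency from the variance of DWT detail coefficients.
import numpy as np
import pywt

def wavelet_variance_saliency(gray, block=16, wavelet="db2"):
    """Return a per-block saliency map from detail-coefficient variance."""
    _, (lh, hl, hh) = pywt.dwt2(gray.astype(float), wavelet)
    detail = np.abs(lh) + np.abs(hl) + np.abs(hh)
    h, w = detail.shape
    bh, bw = h // block, w // block
    detail = detail[: bh * block, : bw * block]   # crop to whole blocks
    blocks = detail.reshape(bh, block, bw, block)
    return blocks.var(axis=(1, 3))                # one score per block

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((256, 256))                       # stand-in image
    img[64:128, 64:128] += np.sin(np.arange(64) / 2)   # textured patch
    sal = wavelet_variance_saliency(img)
    print("most salient block:", np.unravel_index(sal.argmax(), sal.shape))
```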

5 citations

Proceedings ArticleDOI
01 Nov 1989
TL;DR: A very fast image coding algorithm that embeds the recently developed visual pattern image coding (VPIC) algorithm in a multi-resolution (pyramid) structure; it achieves compression ratios of about 24:1 and, in the implementation demonstrated here, requires only 22.8 additions and 3.84 multiplications per image pixel.
Abstract: We present a very fast image coding algorithm that employs the recently developed visual pattern image coding (VPIC) algorithm embedded in a multi-resolution (pyramid) structure. At each level in the hierarchy, the image is coded by the VPIC algorithm [1]. The low-resolution images coded at the upper levels of the pyramid are used to augment coding of the higher-resolution images. The interaction between the different resolution levels is both simple and computationally efficient, and yields a significant increase in compression ratio relative to simple VPIC, with improved image quality and with little increase in complexity. The resulting hierarchical VPIC (HVPIC) algorithm achieves compression ratios of about 24:1 and, in the implementation demonstrated here, requires only 22.8 additions and 3.84 multiplications per image pixel.
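The pyramid structure can be sketched as follows. The actual VPIC pattern coder (which matches small blocks against a visual-pattern codebook) is replaced here by a crude residual quantizer in `code_level`, so this only illustrates how coarse levels augment the coding of finer ones, not the VPIC codebook itself.

```python
# Sketch: coarse-to-fine pyramid coding, with a placeholder for VPIC.
import numpy as np

def downsample(img):
    """2x decimation by block averaging (a simple pyramid filter)."""
    h, w = img.shape
    return img[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def code_level(level, prediction):
    """Placeholder for VPIC: crudely quantize the prediction residual."""
    residual = level - prediction
    return np.round(residual / 8) * 8

def hvpic_encode(img, depth=3):
    """Code coarse-to-fine; each coded level predicts the next finer one.
    Assumes power-of-two image dimensions for simplicity."""
    pyramid = [img]
    for _ in range(depth - 1):
        pyramid.append(downsample(pyramid[-1]))
    coded = []
    prediction = np.zeros_like(pyramid[-1])
    for level in reversed(pyramid):
        if prediction.shape != level.shape:          # upsample coarser prediction
            prediction = np.kron(prediction, np.ones((2, 2)))
        layer = code_level(level, prediction)
        coded.append(layer)
        prediction = prediction + layer              # coarse reconstruction
    return coded

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.integers(0, 256, (256, 256)).astype(float)
    print([layer.shape for layer in hvpic_encode(frame)])
```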

5 citations

Proceedings ArticleDOI
TL;DR: In this paper, the features from VMAF and GREED are fused in order to exploit the advantages of both models, and the proposed fusion framework results in more efficient features for predicting frame-rate-dependent video quality.
Abstract: The popularity of streaming videos with live, high-action content has led to increased interest in High Frame Rate (HFR) videos. In this work we address the problem of frame-rate-dependent Video Quality Assessment (VQA) when the videos to be compared differ in frame rate and compression factor. Current VQA models such as VMAF correlate well with perceptual judgments when the videos being compared have the same frame rate and contain conventional distortions such as compression and scaling. However, this framework requires an additional pre-processing step when videos with different frame rates need to be compared, which can potentially limit its overall performance. Recently, the Generalized Entropic Difference (GREED) VQA model was proposed to account for artifacts that arise from changes in frame rate, and it showed superior performance on the LIVE-YT-HFR database, which contains frame-rate-dependent artifacts such as judder and strobing. In this paper we propose a simple extension in which the features from VMAF and GREED are fused in order to exploit the advantages of both models. We show through various experiments that the proposed fusion framework results in more efficient features for predicting frame-rate-dependent video quality. We also evaluate the fused feature set on standard non-HFR VQA databases and obtain superior performance over both GREED and VMAF, indicating that the combined feature set captures complementary perceptual quality information.
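A minimal sketch of the fusion idea, assuming scikit-learn: per-video VMAF and GREED feature vectors are concatenated, and a support vector regressor maps the joint vector to subjective scores. The feature dimensions, the regressor settings, and the random stand-in data are placeholders; the paper's actual feature extraction and training protocol are not reproduced here.

```python
# Sketch: early fusion of VMAF and GREED features, regressed with an SVR.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def fused_features(vmaf_feats, greed_feats):
    """Simple early fusion: one joint feature vector per video."""
    return np.concatenate([vmaf_feats, greed_feats], axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 120                                 # hypothetical training videos
    vmaf = rng.random((n, 6))               # stand-in VMAF features
    greed = rng.random((n, 8))              # stand-in GREED entropic features
    mos = rng.random(n) * 100               # stand-in subjective scores
    X = fused_features(vmaf, greed)
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    model.fit(X, mos)
    print("in-sample correlation:",
          np.corrcoef(model.predict(X), mos)[0, 1])
```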

5 citations

Journal ArticleDOI
01 Jan 2022
TL;DR: It is shown that this model achieves comparable performance to state-of-the-art NR image quality models when evaluated on real images afflicted with synthetic distortions, even without using any real images during training.
Abstract: Training deep models using contrastive learning has achieved impressive performance on various computer vision tasks. Since training is done in a self-supervised manner on unlabeled data, contrastive learning is an attractive candidate for applications in which large labeled datasets are hard or expensive to obtain. In this work we investigate the outcomes of using contrastive learning on synthetically generated images for the Image Quality Assessment (IQA) problem. The training data consist of computer-generated images corrupted with predetermined distortion types. Predicting the distortion type and degree is used as an auxiliary task to learn image quality features. The learned representations are then used to predict quality in a No-Reference (NR) setting on real-world images. We show through extensive experiments that this model achieves performance comparable to state-of-the-art NR image quality models when evaluated on real images afflicted with synthetic distortions, even without using any real images during training. Our results indicate that training with synthetically generated images can lead to effective and perceptually relevant representations.
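A minimal PyTorch sketch of the auxiliary-task setup described above: an encoder predicts the known distortion type and level of each synthetic image, and its penultimate features can later be reused as quality representations for NR-IQA regression. The architecture and the distortion taxonomy sizes are arbitrary stand-ins, not the paper's.

```python
# Sketch: learning quality features by predicting synthetic distortions.
import torch
import torch.nn as nn

N_TYPES, N_LEVELS = 25, 5      # hypothetical distortion taxonomy

class DistortionAwareEncoder(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.type_head = nn.Linear(feat_dim, N_TYPES)     # auxiliary task 1
        self.level_head = nn.Linear(feat_dim, N_LEVELS)   # auxiliary task 2

    def forward(self, x):
        f = self.backbone(x)
        return f, self.type_head(f), self.level_head(f)

if __name__ == "__main__":
    model = DistortionAwareEncoder()
    x = torch.randn(8, 3, 224, 224)            # batch of synthetic images
    dist_type = torch.randint(0, N_TYPES, (8,))
    dist_level = torch.randint(0, N_LEVELS, (8,))
    feats, t_logits, l_logits = model(x)
    loss = nn.functional.cross_entropy(t_logits, dist_type) \
         + nn.functional.cross_entropy(l_logits, dist_level)
    loss.backward()
    print("quality features:", feats.shape)    # reused for NR-IQA regression
```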

5 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and its promise is demonstrated through comparisons with subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
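For reference, a simplified Python version of the index. The statistics below are computed globally for brevity, whereas the published implementation computes them locally under an 11x11 circular-symmetric Gaussian window and averages the local scores; the stabilizing constants use the paper's K1 = 0.01 and K2 = 0.03.

```python
# Sketch: global (non-windowed) structural similarity index.
import numpy as np

def ssim_global(x, y, data_range=255.0):
    C1 = (0.01 * data_range) ** 2
    C2 = (0.03 * data_range) ** 2
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()     # cross-covariance
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64)).astype(float)
    noisy = np.clip(ref + rng.normal(0, 15, ref.shape), 0, 255)
    print("SSIM(ref, ref)   =", ssim_global(ref, ref))    # exactly 1.0
    print("SSIM(ref, noisy) =", round(ssim_global(ref, noisy), 3))
```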

40,609 citations

Book
01 Jan 1998
TL;DR: A textbook on wavelet theory and its applications, spanning Fourier analysis, time-frequency methods, frames, wavelet and wavelet packet bases, approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations

Proceedings ArticleDOI
21 Jul 2017
TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of Twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
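The objective can be sketched as follows in PyTorch: the generator minimizes an adversarial term plus an L1 reconstruction term (weighted by 100 in the paper), while the discriminator separates real (input, target) pairs from (input, output) pairs. The patch-logit shape in the demo is an illustrative PatchGAN-style assumption; the networks themselves are omitted.

```python
# Sketch: the pix2pix-style conditional GAN losses.
import torch
import torch.nn.functional as F

def generator_loss(disc_fake_logits, fake, target, lambda_l1=100.0):
    """Adversarial term (fool D on fake pairs) + L1 reconstruction term."""
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return adv + lambda_l1 * F.l1_loss(fake, target)

def discriminator_loss(disc_real_logits, disc_fake_logits):
    """Real (input, target) pairs -> 1, (input, G(input)) pairs -> 0."""
    real = F.binary_cross_entropy_with_logits(
        disc_real_logits, torch.ones_like(disc_real_logits))
    fake = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.zeros_like(disc_fake_logits))
    return 0.5 * (real + fake)

if __name__ == "__main__":
    fake = torch.rand(2, 3, 256, 256)
    target = torch.rand(2, 3, 256, 256)
    d_fake = torch.randn(2, 1, 30, 30)   # PatchGAN-style patch logits
    d_real = torch.randn(2, 1, 30, 30)
    print("G loss:", generator_loss(d_fake, fake, target).item())
    print("D loss:", discriminator_loss(d_real, d_fake).item())
```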

11,958 citations

Posted Content
TL;DR: Conditional adversarial networks, as discussed by the authors, offer a general-purpose solution to image-to-image translation problems and can be used to synthesize photos from label maps, reconstruct objects from edge maps, and colorize images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.

11,127 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England), which reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly sorted, chaotic, mud-supported floatstones.

9,929 citations