Author

Alan C. Bovik

Bio: Alan C. Bovik is an academic researcher at the University of Texas at Austin. His research focuses on the topics of image quality and video quality. He has an h-index of 102 and has co-authored 837 publications that have received 96,088 citations. His previous affiliations include the University of Illinois at Urbana–Champaign and the University of Sydney.


Papers
Proceedings ArticleDOI
01 Dec 2017
TL;DR: This work investigates the use of automated Video Quality Assessment (VQA) algorithms, driven by well-defined natural scene statistics (NSS) that capture the behavior of natural distortion-free videos, to evaluate digital video collections.
Abstract: We investigate the use of automated Video Quality Assessment (VQA) algorithms to evaluate digital video collections. These algorithms are driven by well-defined natural scene statistics (NSS), which capture the behavior of natural distortion-free videos. Because human vision has adapted to these real-world statistics over the course of evolution, quality predictions delivered by these NSS-based VQA algorithms correlate well with human opinions of quality. In particular, we expect these algorithms to accurately predict quality on sizable and diverse video collections. To test this hypothesis, we gathered a testbed of video clips that represent a larger video art collection. Next, we conducted a human study in which users scored the quality of the clips. Enabled by the human study, we trained three VQA algorithms (Video BLIINDS, BRISQUE, and VIIDEO) using our testbed collection to assess a real-world digital video art collection from our university museum. Two of the algorithms provided good automatic predictions of the quality of the videos. These same algorithms also highlighted limitations that arise when assessing artistic collections. We present current research progress and discuss future directions for testbed and algorithm improvement. Our ongoing effort furthers the field of Computational Archival Science by applying computational models of human perception to video appraisal and preservation tasks.
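To make the NSS idea concrete: BRISQUE-style models operate on mean-subtracted contrast-normalized (MSCN) coefficients, whose empirical distribution shifts predictably under distortion. Below is a minimal pure-Python sketch of MSCN computation; it substitutes a 3x3 box window for the Gaussian weighting used in the published algorithm, so it is illustrative only.

```python
import math

def mscn(patch, C=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients.

    Simplified sketch: a 3x3 box window estimates the local mean and
    standard deviation (the published algorithm uses a Gaussian
    window); C stabilizes the division in flat regions.
    """
    h, w = len(patch), len(patch[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # gather the 3x3 neighbourhood, clamped at the borders
            vals = [patch[y][x]
                    for y in range(max(0, i - 1), min(h, i + 2))
                    for x in range(max(0, j - 1), min(w, j + 2))]
            mu = sum(vals) / len(vals)
            sigma = math.sqrt(sum((v - mu) ** 2 for v in vals) / len(vals))
            out[i][j] = (patch[i][j] - mu) / (sigma + C)
    return out

# A perfectly flat patch yields zero MSCN coefficients everywhere;
# distortions change the empirical distribution of these values.
flat = [[128.0] * 4 for _ in range(4)]
print(mscn(flat)[0][0])  # 0.0
```

NSS-based models such as BRISQUE then fit a parametric distribution (e.g., a generalized Gaussian) to these coefficients and use the fitted parameters as quality-aware features.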

2 citations

Proceedings ArticleDOI
01 Oct 1990
TL;DR: This paper studies the computation of surface orientation by analyzing the responses of multiple spatio-spectrally localized channel filters using Gabor functions, which have previously been applied successfully to related problems in texture analysis, segmentation, and characterization.
Abstract: This paper studies the computation of surface orientation by analyzing the responses of multiple spatio-spectrally localized channel filters. Images containing textures that encode information about local surface orientation are decomposed into narrowband sub-images possessing characteristic radial frequency and orientation properties. By analyzing the spatial variation in the filter responses, information about the spatial variation in the pattern/texture can be elucidated and subsequently used to estimate surface orientation. The channel filters used are Gabor functions, which have previously been applied successfully to related problems in texture analysis, segmentation, and characterization. The Gabor functions are plausible approximations to the responses of the highly oriented simple cells in mammalian striate cortex. They also possess important properties for the local isolation and characterization of textures. In our approach, texture gradients are modeled as giving rise to pattern frequency gradients that can be extracted on a highly localized basis. A variational optimization procedure for estimating the pattern frequency variation is implemented via a discrete relaxation procedure that is suitable for a massively parallel computation. The result of the optimization procedure is a stable dense map describing the localized image frequency content. The computed image frequency characteristics are then used to define a texture density measure used in a planar-surface approximation procedure, yielding slant/tilt estimates describing the surface orientation. Experimental results support the theoretical derivations.
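The channel filters described above can be sketched directly. The following is a minimal pure-Python construction of the real part of a 2-D Gabor kernel; the parameter names (`freq`, `theta`, `sigma`) and the isotropic Gaussian envelope are simplifying assumptions, not the paper's exact filter bank.

```python
import math

def gabor_kernel(size, freq, theta, sigma):
    """Real part of a 2-D Gabor filter: a Gaussian envelope modulating
    a cosine carrier with radial frequency `freq` (cycles/pixel) at
    orientation `theta` (radians). Illustrative sketch only."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # project onto the carrier direction given by theta
            xr = x * math.cos(theta) + y * math.sin(theta)
            envelope = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            carrier = math.cos(2 * math.pi * freq * xr)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

# The centre tap is the envelope peak times cos(0) = 1.
k = gabor_kernel(size=7, freq=0.25, theta=0.0, sigma=2.0)
print(k[3][3])  # 1.0
```

Convolving an image with a bank of such kernels at several frequencies and orientations yields the narrowband sub-images whose local frequency variation encodes the texture gradient.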

2 citations

Proceedings ArticleDOI
16 Jun 2011
TL;DR: A psychovisual study is undertaken to infer the visually lossless threshold for H.264 compression of videos spanning a wide range of contents and a compressibility index is proposed which provides a measure of the appropriate bit-rate for VL H. 264 compression.
Abstract: Although the term ‘visually lossless’ (VL) has been used liberally in the video compression literature, there does not seem to be a systematic evaluation of what it means for a video to be compressed visually losslessly. Here, we undertake a psychovisual study to infer the visually lossless threshold for H.264 compression of videos spanning a wide range of contents. Based on results from this study, we then propose a compressibility index which provides a measure of the appropriate bit-rate for VL H.264 compression of a video given texture (i.e., spatial activity) and motion (i.e., temporal activity) information. This compressibility index has been made available online at [1] in order to facilitate practical application of the research presented here and to further research in the area of VL compression.
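The texture/motion inputs to such a compressibility index are commonly measured with ITU-T P.910-style spatial information (SI) and temporal information (TI) statistics. The sketch below computes crude SI/TI proxies in pure Python and combines them linearly; the combination weights are hypothetical and are not the index proposed in the paper.

```python
import math

def spatial_info(frame):
    """Spatial activity: std-dev of a simple gradient magnitude
    (a crude stand-in for the Sobel filtering used by ITU-T P.910 SI)."""
    grads = []
    for i in range(1, len(frame)):
        for j in range(1, len(frame[0])):
            gx = frame[i][j] - frame[i][j - 1]
            gy = frame[i][j] - frame[i - 1][j]
            grads.append(math.hypot(gx, gy))
    mu = sum(grads) / len(grads)
    return math.sqrt(sum((g - mu) ** 2 for g in grads) / len(grads))

def temporal_info(prev, cur):
    """Temporal activity: std-dev of the frame difference (as in P.910 TI)."""
    diffs = [cur[i][j] - prev[i][j]
             for i in range(len(cur)) for j in range(len(cur[0]))]
    mu = sum(diffs) / len(diffs)
    return math.sqrt(sum((d - mu) ** 2 for d in diffs) / len(diffs))

def compressibility_index(si, ti, a=0.5, b=0.5):
    """Hypothetical index: higher activity -> higher bit-rate needed.
    The linear weighting is illustrative, not the paper's model."""
    return a * si + b * ti

# A static, flat clip has zero activity, hence minimal bit-rate demand.
flat = [[100.0] * 4 for _ in range(4)]
print(compressibility_index(spatial_info(flat), temporal_info(flat, flat)))  # 0.0
```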

2 citations

Proceedings ArticleDOI
20 Jul 2022
TL;DR: This work presents a first-of-its-kind online video quality prediction framework for live streaming: a multi-modal learning framework with separate pathways for visual and audio quality prediction, able to provide accurate quality predictions at the patch, frame, clip, and audiovisual levels.
Abstract: Video conferencing, which includes both video and audio content, has contributed to dramatic increases in Internet traffic, as the COVID-19 pandemic forced millions of people to work and learn from home. Because of this, efficient and accurate video quality tools are needed to monitor and perceptually optimize telepresence traffic streamed via Zoom, Webex, Meet, etc. However, existing models are limited in their prediction capabilities on multi-modal, live streaming telepresence content. Here we address the significant challenges of Telepresence Video Quality Assessment (TVQA) in several ways. First, we mitigated the dearth of subjectively labeled data by collecting ∼2k telepresence videos from different countries, on which we crowdsourced ∼80k subjective quality labels. Using this new resource, we created a first-of-its-kind online video quality prediction framework for live streaming, using a multi-modal learning framework with separate pathways to compute visual and audio quality predictions. Our all-in-one model is able to provide accurate quality predictions at the patch, frame, clip, and audiovisual levels. Our model achieves state-of-the-art performance on both existing quality databases and our new TVQA database, at a considerably lower computational expense, making it an attractive solution for mobile and embedded systems.
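The patch-to-audiovisual prediction hierarchy the abstract describes can be illustrated with simple pooling. The sketch below averages patch scores up to a video score and applies a weighted audio-visual late fusion; the mean pooling and the fusion weights are stand-ins for the paper's learned model.

```python
def mean(xs):
    return sum(xs) / len(xs)

def pool_video(patch_scores):
    """patch_scores: clips -> frames -> patch-level quality scores.
    Pool patches into frame scores, frames into clip scores, and
    clips into one video score by simple averaging (the learned
    pooling of the real model is replaced by means for illustration)."""
    clip_scores = [mean([mean(frame) for frame in clip])
                   for clip in patch_scores]
    return mean(clip_scores)

def fuse_av(video_q, audio_q, w_video=0.7, w_audio=0.3):
    """Hypothetical late fusion of visual and audio quality scores."""
    return w_video * video_q + w_audio * audio_q

# Two clips: the first has two frames of two patches, the second one frame.
clips = [[[80.0, 90.0], [70.0, 80.0]], [[60.0, 70.0]]]
v = pool_video(clips)  # -> 72.5
print(fuse_av(v, 60.0))
```

Averaging is the simplest pooling choice; learned temporal pooling typically weights low-quality events more heavily, reflecting how viewers penalize transient glitches.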

2 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and is validated against both subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
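The structural similarity index itself is compact enough to state in a few lines. The following pure-Python sketch evaluates the SSIM formula over a single window; the published algorithm applies it locally with an 11x11 Gaussian window and averages the resulting map over the image.

```python
def ssim(x, y, C1=6.5025, C2=58.5225):
    """SSIM between two equal-sized grayscale patches (flattened lists).

    Single-window sketch of the index. C1, C2 are the usual constants
    for 8-bit images: (K1*L)^2 and (K2*L)^2 with K1=0.01, K2=0.03, L=255.
    """
    n = len(x)
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    # luminance/contrast/structure terms combined into one ratio
    return ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))

patch = [52.0, 55.0, 61.0, 59.0, 79.0, 61.0, 76.0, 61.0, 88.0]
print(ssim(patch, patch))  # 1.0 for identical patches
```

Unlike mean squared error, the index saturates at 1 only when luminance, contrast, and local structure all match, which is what makes it track human judgments.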

40,609 citations

Book
01 Jan 1998
TL;DR: A textbook treatment of wavelet signal processing, spanning Fourier and time-frequency analysis, frames, wavelet bases, wavelet packet and local cosine bases, approximation theory, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.
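The wavelet machinery the book develops can be illustrated with its simplest instance, the Haar transform. The sketch below performs one level of the orthonormal Haar analysis/synthesis pair in pure Python; the 1/sqrt(2) scaling preserves signal energy, which is what makes the basis orthonormal.

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar wavelet transform:
    pairwise sums (approximation) and differences (detail),
    each scaled by 1/sqrt(2). Signal length must be even."""
    s = 1 / math.sqrt(2)
    approx = [(signal[2 * i] + signal[2 * i + 1]) * s
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) * s
              for i in range(len(signal) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_step."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out

a, d = haar_step([4.0, 6.0, 10.0, 12.0])
print(a, d)               # smooth trend and local fluctuations
print(haar_inverse(a, d))  # reconstructs the original signal
```

Recursing `haar_step` on the approximation coefficients yields the multi-level decompositions used throughout the book for approximation, estimation, and transform coding.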

17,693 citations

Proceedings ArticleDOI
21 Jul 2017
TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of Twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
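The learned loss the abstract refers to is the standard conditional-GAN objective. In the usual formulation, a generator G maps an input image x and noise z to an output, a discriminator D judges input/output pairs, and the pix2pix work combines the adversarial loss with an L1 reconstruction term weighted by λ:

```latex
\mathcal{L}_{cGAN}(G, D) =
  \mathbb{E}_{x,y}\left[\log D(x, y)\right]
  + \mathbb{E}_{x,z}\left[\log\bigl(1 - D(x, G(x, z))\bigr)\right]

G^{*} = \arg\min_{G} \max_{D}\; \mathcal{L}_{cGAN}(G, D)
  + \lambda\, \mathbb{E}_{x,y,z}\bigl[\lVert y - G(x, z)\rVert_{1}\bigr]
```

The L1 term anchors low-frequency content to the ground truth y, while the adversarial term supplies the high-frequency detail that fixed per-pixel losses tend to blur.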

11,958 citations

Posted Content
TL;DR: Conditional adversarial networks, as discussed by the authors, offer a general-purpose solution to image-to-image translation problems, and can be used to synthesize photos from label maps, reconstruct objects from edge maps, and colorize images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.

11,127 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits better fit submarine fan systems. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, which are characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These

9,929 citations