Author

Alan C. Bovik

Bio: Alan C. Bovik is an academic researcher from the University of Texas at Austin. The author has contributed to research in the topics of Image quality and Video quality. The author has an h-index of 102 and has co-authored 837 publications receiving 96,088 citations. Previous affiliations of Alan C. Bovik include the University of Illinois at Urbana–Champaign and the University of Sydney.


Papers
Journal ArticleDOI
TL;DR: A novel UGC gaming video resource is created, called the LIVE-YouTube Gaming video quality (LIVE-YT-Gaming) database, comprising 600 real UGC gaming videos, and a subjective human study is conducted on this data, yielding 18,600 human quality ratings recorded by 61 human subjects.
Abstract: The rising popularity of online User-Generated-Content (UGC) in the form of streamed and shared videos has hastened the development of perceptual Video Quality Assessment (VQA) models, which can be used to help optimize their delivery. Gaming videos, which are a relatively new type of UGC videos, are created when skilled gamers post videos of their gameplay. These UGC gameplay videos have become extremely popular on major streaming platforms like YouTube and Twitch. Synthetically generated gaming content exhibits different statistical behavior than naturalistic videos and presents challenges to existing VQA algorithms, including those based on natural scene/video statistics models. A number of studies have been directed towards understanding the perceptual characteristics of professionally generated gaming videos arising in gaming video streaming, online gaming, and cloud gaming. However, little work has been done on understanding the quality of UGC gaming videos, and how it can be characterized and predicted. Towards boosting the progress of gaming video VQA model development, we conducted a comprehensive study of subjective and objective VQA models on UGC gaming videos. To do this, we created a novel UGC gaming video resource, called the LIVE-YouTube Gaming video quality (LIVE-YT-Gaming) database, comprising 600 real UGC gaming videos. We conducted a subjective human study on this data, yielding 18,600 human quality ratings recorded by 61 human subjects. We also evaluated a number of state-of-the-art (SOTA) VQA models on the new database, including a new one, called GAME-VQP, based on both natural video statistics and CNN-learned features. To help support work in this field, we are making the new LIVE-YT-Gaming database publicly available at: https://live.ece.utexas.edu/research/LIVE-YT-Gaming/index.html
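
Evaluating objective VQA models against a subjective study of this kind usually comes down to computing rank and linear correlations between the predicted quality scores and the mean opinion scores (MOS). The sketch below shows that comparison in Python with hypothetical placeholder numbers; it does not use the LIVE-YT-Gaming data or the GAME-VQP model.

    # Sketch: comparing objective VQA predictions to subjective MOS.
    # All score values below are illustrative placeholders, not data from the study.
    import numpy as np
    from scipy.stats import spearmanr, pearsonr

    mos = np.array([62.1, 45.3, 78.9, 55.0, 70.2])        # mean opinion scores (hypothetical)
    predicted = np.array([0.58, 0.41, 0.75, 0.52, 0.66])  # VQA model outputs (hypothetical)

    srocc, _ = spearmanr(predicted, mos)   # monotonic (rank) agreement
    plcc, _ = pearsonr(predicted, mos)     # linear agreement
    print(f"SROCC = {srocc:.3f}, PLCC = {plcc:.3f}")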

2 citations

Journal ArticleDOI
TL;DR: A pixel-based no-reference video quality assessment method is proposed that addresses the challenges of underwater video delivery and correlates well with subjective scores from users of underwater videos.
Abstract: Underwater imagery is increasingly drawing attention from the scientific community, since pictures and videos are invaluable tools in the study of the vast, largely unknown oceanic environment that covers 90% of the planetary biosphere. However, underwater sensor networks must cope with the harsh channel that seawater constitutes. Medium-range communication is only possible using acoustic modems, which have limited transmission capabilities and peak bitrates of only a few dozen kilobits per second. These reduced bitrates force heavy compression on videos, yielding much higher levels of distortion than in other video services. Furthermore, underwater video users are ocean researchers, and therefore their quality perception also differs from that of the generic viewers who typically take part in subjective quality assessment experiments. Computational efficiency is also important, since the underwater nodes must run on batteries and their recovery is very expensive. In this paper, we propose a pixel-based no-reference video quality assessment method that addresses the described challenges and achieves good correlations against subjective scores from users of underwater videos.
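
A pixel-based no-reference measure operates directly on the decoded frames, with no access to the pristine source. The following minimal sketch is not the method proposed in the paper; it only illustrates the general form of such a measure, using the variance of the Laplacian as a crude per-frame sharpness proxy and assuming frames arrive as grayscale NumPy arrays.

    # Illustrative pixel-based no-reference feature (not the paper's method):
    # average variance of the Laplacian over frames as a crude sharpness/blur proxy.
    import numpy as np
    from scipy.ndimage import laplace

    def frame_sharpness(frame: np.ndarray) -> float:
        """Variance of the Laplacian of a grayscale frame (higher = sharper)."""
        return float(laplace(frame.astype(np.float64)).var())

    def video_sharpness(frames) -> float:
        """Average per-frame sharpness over an iterable of grayscale frames."""
        return float(np.mean([frame_sharpness(f) for f in frames]))

    # Example with synthetic frames (random noise stands in for real video):
    frames = [np.random.rand(120, 160) for _ in range(10)]
    print(video_sharpness(frames))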

2 citations

Proceedings Article
20 Aug 1989
TL;DR: This work measured the minimum disparity needed to produce a reversal in apparent depth in ambiguous chromatic "wallpaper" stereograms and demonstrated that chromatic information can greatly reduce the matching ambiguity, while significantly increasing both matching accuracy and algorithm speed.
Abstract: One approach to developing faster, more robust stereo algorithms is to seek a more complete and efficient use of the information available in stereo images. The use of chromatic (color) information has been largely neglected in this regard. Motivations for using chromatic information are discussed, including strong evidence for its use in the human stereo correspondence process in the form of a novel psychophysical experiment which we have performed. Specifically, we measured the minimum disparity needed to produce a reversal in apparent depth in ambiguous chromatic "wallpaper" stereograms. Our results indicate that chromatic information plays an important role in the stereo correspondence process when luminance variations are present. To investigate the potential role of chromatic information in computational stereo algorithms, a novel chromatic matching constraint -- the chromatic gradient matching constraint -- is presented. Then, a thorough analysis of the utility of this constraint in the PMF Algorithm is performed for a large range of sizes of the matching strength support neighborhood, and the performances of the algorithm with and without the constraint are directly compared in terms of disambiguation ability, matching accuracy, and algorithm speed. The results demonstrate that chromatic information can greatly reduce the matching ambiguity, while significantly increasing both matching accuracy and algorithm speed.
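
As a rough illustration of how a chromatic gradient cue can help disambiguate correspondences (this is a sketch under stated assumptions, not the paper's chromatic gradient matching constraint or the PMF Algorithm), the function below compares local gradients of two simple chromatic channels at a candidate left/right match; a lower cost means the colors around the two pixels vary in more similar ways.

    # Sketch: a chromatic-gradient matching cost for candidate stereo correspondences.
    # Not the paper's constraint; just an illustration of using color gradients to reduce ambiguity.
    import numpy as np

    def chromatic_channels(rgb: np.ndarray) -> np.ndarray:
        """Two simple chromatic channels (R-G and B-(R+G)/2) from an HxWx3 float image."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        return np.stack([r - g, b - 0.5 * (r + g)], axis=-1)

    def chromatic_gradient_cost(left: np.ndarray, right: np.ndarray,
                                yl: int, xl: int, yr: int, xr: int) -> float:
        """Dissimilarity of chromatic gradients at a candidate match (lower = more compatible)."""
        cl, cr = chromatic_channels(left), chromatic_channels(right)
        gl = np.stack(np.gradient(cl[..., 0]) + np.gradient(cl[..., 1]), axis=-1)
        gr = np.stack(np.gradient(cr[..., 0]) + np.gradient(cr[..., 1]), axis=-1)
        return float(np.linalg.norm(gl[yl, xl] - gr[yr, xr]))

    # Usage with synthetic images (random color noise stands in for a real stereo pair):
    L, R = np.random.rand(100, 120, 3), np.random.rand(100, 120, 3)
    print(chromatic_gradient_cost(L, R, 50, 60, 50, 55))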

2 citations

Proceedings ArticleDOI
12 Nov 2007
TL;DR: A foundation of theorems is derived that provides a means for obtaining optimal sampling schemes for a given set of epipolar spaces, where an optimal scheme is defined as a strategy that minimizes the average area per epipolar space.
Abstract: If precise calibration information is unavailable, as is often the case for active binocular vision systems, the determination of epipolar lines becomes untenable. Yet, even without instantaneous knowledge of the geometry, the search for corresponding points can be restricted to areas called epipolar spaces. For each point in one image, we define the corresponding epipolar space in the other image as the union of all associated epipolar lines over all possible system geometries. Epipolar spaces eliminate the need for calibration at the cost of an increased search region. One approach to mitigate this increase is the application of a space variant sampling or foveation strategy. While the application of such strategies to stereo vision tasks is not new, only rarely has a foveation scheme been specifically tailored for a stereo vision task. In this paper we derive a foundation of theorems that provide a means for obtaining optimal sampling schemes for a given set of epipolar spaces. An optimal sampling scheme is defined as a strategy that minimizes the average area per epipolar space.
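
To make the epipolar-space idea concrete, the sketch below marks, for one left-image point, every right-image pixel that lies within a tolerance of the epipolar line induced by at least one fundamental matrix drawn from a set of hypothesized geometries. It is an illustrative construction under those assumptions, not the paper's formulation or its optimal sampling theorems.

    # Sketch: epipolar space of a left-image point as the union of epipolar lines
    # over a set of candidate fundamental matrices (hypothetical geometries).
    import numpy as np

    def epipolar_space_mask(x_left, fundamental_matrices, shape, tol=1.0):
        """Boolean HxW mask of right-image pixels within `tol` pixels of any epipolar line."""
        h, w = shape
        ys, xs = np.mgrid[0:h, 0:w]
        pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)  # homogeneous pixels
        x = np.array([x_left[0], x_left[1], 1.0])
        mask = np.zeros((h, w), dtype=bool)
        for F in fundamental_matrices:
            l = F @ x                                         # epipolar line l = F x, as (a, b, c)
            dist = np.abs(pts @ l) / np.hypot(l[0], l[1])     # point-to-line distance per pixel
            mask |= dist <= tol
        return mask

    # Usage with made-up geometries:
    Fs = [np.random.randn(3, 3) for _ in range(5)]
    m = epipolar_space_mask((50, 40), Fs, shape=(120, 160))
    print(m.mean())   # fraction of the right image covered: a proxy for the search area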

2 citations

Proceedings ArticleDOI
18 Mar 2005
TL;DR: An error-resilient image communications application is presented that uses the Gaussian scale mixture (GSM) model and multiple description coding (MDC) to provide error resilience; a rate-distortion bound for GSM random variables and the redundancy rate-distortion function are derived, and an MD image communication system is implemented.
Abstract: The statistics of natural scenes in the wavelet domain are accurately characterized by the Gaussian scale mixture (GSM) model. The model lends itself easily to analysis and many applications that use this model are emerging (e.g., denoising, watermark detection). We present an error-resilient image communications application that uses the GSM model and multiple description coding (MDC) to provide error-resilience. We derive a rate-distortion bound for GSM random variables, derive the redundancy rate-distortion function, and finally implement an MD image communication system.
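
For background on the signal model the application rests on: a Gaussian scale mixture represents a vector of wavelet coefficients as a zero-mean Gaussian vector multiplied by the square root of an independent positive scalar. The sketch below draws GSM samples under an assumed lognormal mixing density (the choice of mixer and its parameters is illustrative, not taken from the paper) and checks that the marginals are heavier-tailed than Gaussian.

    # Sketch: drawing samples from a Gaussian scale mixture, c = sqrt(z) * u,
    # with u ~ N(0, Q) and z a positive scalar multiplier (lognormal here, as an assumption).
    import numpy as np

    rng = np.random.default_rng(0)
    n, dim = 100_000, 2
    Q = np.array([[1.0, 0.6],
                  [0.6, 1.0]])                               # covariance of the Gaussian component

    u = rng.multivariate_normal(np.zeros(dim), Q, size=n)    # Gaussian component
    z = rng.lognormal(mean=0.0, sigma=0.5, size=n)           # positive mixing multiplier
    c = np.sqrt(z)[:, None] * u                              # GSM samples

    # Excess kurtosis > 0 indicates the heavier-than-Gaussian tails typical of wavelet coefficients.
    x = c[:, 0]
    kurt = np.mean((x - x.mean())**4) / x.var()**2
    print(f"excess kurtosis of first component: {kurt - 3:.2f}")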

2 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and it is compared to both subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
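
For reference, the structural similarity index between two aligned image patches x and y is commonly written, with small stabilizing constants C1 and C2, as

    \mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}

where \mu_x and \mu_y are the patch means, \sigma_x^2 and \sigma_y^2 the variances, and \sigma_{xy} the covariance; an image-level score is obtained by averaging SSIM over local windows.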

40,609 citations

Book
01 Jan 1998
TL;DR: A textbook introduction to wavelet signal processing, progressing from Fourier analysis and time-frequency representations to frames, wavelet bases, wavelet packet and local cosine bases, approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations

Proceedings ArticleDOI
21 Jul 2017
TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of Twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
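
In the usual conditional GAN formulation for this setting, the adversarial term plays the role of the learned loss, and the pix2pix paper pairs it with an L1 reconstruction term. With G the generator, D the discriminator, x the input image, y the target, z noise, and lambda a tunable weight (the exact weighting is an assumption here), the objective can be written:

    \mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x, z)))]
    \mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}[\lVert y - G(x, z) \rVert_1]
    G^{*} = \arg\min_{G} \max_{D} \; \mathcal{L}_{cGAN}(G, D) + \lambda \, \mathcal{L}_{L1}(G)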

11,958 citations

Posted Content
TL;DR: Conditional adversarial networks, as discussed by the authors, offer a general-purpose solution to image-to-image translation problems and can be used to synthesize photos from label maps, reconstruct objects from edge maps, and colorize images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.

11,127 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveal a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations