Alan C. Bovik

Researcher at University of Texas at Austin

Publications - 872
Citations - 120,104

Alan C. Bovik is an academic researcher at the University of Texas at Austin. His research focuses on the topics of image quality and video quality. He has an h-index of 102 and has co-authored 837 publications receiving 96,088 citations. Previous affiliations of Alan C. Bovik include the University of Illinois at Urbana–Champaign and the University of Sydney.

Papers

D5a.3 Relating Analog and Digital Order Statistic Filters

TL;DR: In this article, the authors extended the concept of rank-order and order statistic (OS) filters to both analog and digital signals, and developed a formalism for relating the analog (continuous-time) and digital OS filters.
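
For intuition, a digital OS filter replaces each sample with a weighted sum of the sorted values inside a sliding window; the median filter is the special case that puts all weight on the middle order statistic. The sketch below is illustrative only and does not reproduce the paper's analog formalism:

```python
import numpy as np

def os_filter(x, window=5, weights=None):
    """Digital order statistic (OS) filter: sort the samples in a sliding
    window and output a weighted sum of the resulting order statistics.
    The default weights select the middle order statistic, which reduces
    to the standard median filter.
    """
    if weights is None:
        weights = np.zeros(window)
        weights[window // 2] = 1.0  # median filter as the default case
    pad = window // 2
    xp = np.pad(np.asarray(x, dtype=float), pad, mode="edge")
    y = np.empty(len(x), dtype=float)
    for i in range(len(x)):
        y[i] = np.dot(np.sort(xp[i:i + window]), weights)
    return y
```

Called as os_filter(x) with the default weights, this behaves as an ordinary 1-D median filter; other weight vectors give other OS filters.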

Quality Assessment of Mobile Videos with In-Capture Distortions

TL;DR: In this paper, the LIVE Mobile In-capture Video Quality Database (LIVEVMQDB) was created. It consists of 208 videos that were captured using eight different smartphones and were affected by six common in-capture distortions.
Proceedings Article

Blind Video Quality Assessment via Space-Time Slice Statistics

Abstract: User-generated content (UGC) has recently gained increasing attention in the video quality community. Perceptual video quality assessment (VQA) of UGC videos is of great significance to content providers, who must monitor, process, and deliver massive numbers of UGC videos. Blind video quality prediction of UGC videos is challenging, since complex mixtures of spatial and temporal distortions contribute to the overall perceptual quality. In this paper, we develop a simple, effective, and efficient blind VQA framework (STS-QA) based on the statistical analysis of space-time slices (STS) of videos. Specifically, we extract spatio-temporal statistical features along different orientations of video STS, which capture directional global motion, then train a shallow quality predictor. The proposed framework can be used to easily extend any existing video/image quality model to account for temporal or motion regularities. Our experimental results on three publicly available UGC databases demonstrate that our proposed STS-QA model can significantly boost prediction performance compared to baselines. The code will be released at: https://github.com/uniqzheng/STS_BVQA.
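
As a rough illustration of the idea, the snippet below cuts two canonical space-time slices from a video volume and computes simple summary statistics of them. The function names and the choice of statistics are assumptions for illustration only; the authors' actual feature extraction and orientations are in the linked repository.

```python
import numpy as np

def space_time_slices(video):
    """Cut two canonical space-time slices (STS) from a grayscale video
    volume of shape (T, H, W): the central x-t slice and the central
    y-t slice. The paper also uses other slice orientations.
    """
    T, H, W = video.shape
    xt_slice = video[:, H // 2, :]   # time vs. horizontal axis, shape (T, W)
    yt_slice = video[:, :, W // 2]   # time vs. vertical axis, shape (T, H)
    return xt_slice, yt_slice

def slice_stats(s):
    """Toy summary statistics of a slice (mean, variance, skewness,
    kurtosis of mean-subtracted values) -- a stand-in for the
    spatio-temporal statistical features described in the paper."""
    z = s.astype(float) - s.mean()
    var = z.var() + 1e-8
    skew = (z ** 3).mean() / var ** 1.5
    kurt = (z ** 4).mean() / var ** 2
    return np.array([s.mean(), var, skew, kurt])
```

Features like these, pooled over many slices, could then feed the shallow quality regressor the abstract describes.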
Journal Article

Low-level fixation search in natural scenes by optimal extraction of texture-contrast information

TL;DR: The beginnings of a low-level theory of visual fixations in natural scenes are constructed through the formulation and verification of a Barlow-type hypothesis for fixation selection, whereby fixation patterns are chosen to maximally extract contrast and texture information.
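
A toy rendering of the "fixate where the most information can be extracted" idea: scan an image in patches, score each patch by its local RMS contrast, and return the center of the best-scoring patch. The patch size, scoring rule, and function name here are hypothetical simplifications; the paper's criterion jointly optimizes contrast and texture information.

```python
import numpy as np

def next_fixation(image, patch=32):
    """Pick the patch of a grayscale image (2-D array) whose local RMS
    contrast is maximal -- a simplified, contrast-only stand-in for a
    Barlow-type 'maximize extracted information' fixation rule.
    """
    H, W = image.shape
    best, best_yx = -np.inf, (0, 0)
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            p = image[y:y + patch, x:x + patch].astype(float)
            rms = p.std() / (p.mean() + 1e-8)  # local RMS contrast
            if rms > best:
                best, best_yx = rms, (y + patch // 2, x + patch // 2)
    return best_yx  # (row, col) of the suggested fixation center
```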