Author

Alan C. Bovik

Bio: Alan C. Bovik is an academic researcher at the University of Texas at Austin. He has contributed to research topics including image quality and video quality. He has an h-index of 102 and has co-authored 837 publications receiving 96,088 citations. Previous affiliations of Alan C. Bovik include the University of Illinois at Urbana–Champaign and the University of Sydney.


Papers
Proceedings ArticleDOI
06 Oct 2018
TL;DR: A probabilistic quality representation (PQR) is proposed, together with a more robust loss function, for training deep BIQA models; the approach not only speeds up the convergence of deep model training but also greatly improves quality prediction accuracy relative to scalar quality score regression methods under the same setting.
Abstract: Most existing blind image quality assessment (BIQA) methods learn a regression model to predict scalar quality scores. Such a scheme ignores the fact that an image will receive divergent subjective scores from different subjects, which cannot be adequately represented by a single scalar number. This is particularly true of complex, real-world distorted images. However, the more informative score distributions are unavailable in existing image quality assessment (IQA) databases and can be potentially noisy when only a limited number of opinions is collected on each image. This paper proposes a probabilistic quality representation (PQR) and employs a more robust loss function to train deep BIQA models. Using a very straightforward implementation, the proposed method is shown to not only speed up the convergence of deep model training, but also greatly improve the quality prediction accuracy relative to scalar quality score regression methods under the same setting. The source code is available at https://github.com/HuiZeng/BIQA_Toolbox.
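The core idea, replacing a scalar score with a probability distribution over quality levels and training against it, can be sketched in a few lines. The anchor scores, softmax-over-negative-distance encoding, and cross-entropy loss below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def pqr_encode(mos, anchors=(20, 40, 60, 80, 100), temperature=10.0):
    """Map a scalar mean opinion score to a probability distribution
    over hypothetical quality anchors (softmax on negative distances)."""
    logits = [-abs(mos - a) / temperature for a in anchors]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(p, q, eps=1e-12):
    """Distribution-level loss between target p and prediction q,
    standing in for scalar-score regression."""
    return -sum(pi * math.log(qi + eps) for pi, qi in zip(p, q))

target = pqr_encode(72.5)
# A prediction near the target score incurs a smaller loss than a far one.
assert cross_entropy(target, pqr_encode(70.0)) < cross_entropy(target, pqr_encode(30.0))
```

Training against such a distribution gives the model a smoother, better-conditioned target than a single scalar, which is one intuition for the faster convergence reported.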

59 citations

Journal ArticleDOI
TL;DR: The authors introduce a new class of linear filters, spiculated lesion filters, for the detection of converging lines or spiculations, along with a novel technique to enhance spicules on mammograms.
Abstract: The detection of lesions on mammography is a repetitive and fatiguing task. Thus, computer-aided detection systems have been developed to aid radiologists. The detection accuracy of current systems is much higher for clusters of microcalcifications than for spiculated masses. In this article, the authors present a new model-based framework for the detection of spiculated masses. The authors have invented a new class of linear filters, spiculated lesion filters, for the detection of converging lines or spiculations. These filters are highly specific narrowband filters, which are designed to match the expected structures of spiculated masses. As a part of this algorithm, the authors have also invented a novel technique to enhance spicules on mammograms. This entails filtering in the radon domain. They have also developed models to reduce the false positives due to normal linear structures. A key contribution of this work is that the parameters of the detection algorithm are based on measurements of physical properties of spiculated masses. The results of the detection algorithm are presented in the form of free-response receiver operating characteristic curves on images from the Mammographic Image Analysis Society and Digital Database for Screening Mammography databases.
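The Radon-domain idea can be illustrated with simple axis-aligned projections: a straight bright line collapses to a sharp peak in the projection taken perpendicular to it, which is what makes line-like spicules easy to enhance in that domain. A toy sketch on a binary image, using only the 0° and 90° projections (the actual method filters full Radon-domain data):

```python
def projections(img):
    """Row and column sums of a small image. A bright line shows up
    as a sharp peak in the projection perpendicular to it."""
    rows = [sum(r) for r in img]
    cols = [sum(c) for c in zip(*img)]
    return rows, cols

# 5x5 toy image with a vertical line in column 2
img = [[1 if j == 2 else 0 for j in range(5)] for i in range(5)]
rows, cols = projections(img)
assert max(cols) == 5 and cols.index(max(cols)) == 2  # line -> sharp peak
assert max(rows) == 1                                 # no row-aligned structure
```

Enhancing peaks in projection space and transforming back is the essence of line enhancement in the Radon domain.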

58 citations

Journal ArticleDOI
TL;DR: A theoretical framework is presented in which the existence of locally monotonic regressions is proved, and algorithms for their computation are given.
Abstract: The concept of local monotonicity appears in the study of the set of root signals of the median filter and provides a measure of the smoothness of the signal. The median filter is a suboptimal smoother under this measure of smoothness, since a filter pass does not necessarily yield a locally monotonic output; even if a locally monotonic output does result, there is no guarantee that it will possess other desirable properties such as optimal similarity to the original signal. Locally monotonic regression is a technique for the optimal smoothing of finite-length discrete real signals under such a criterion. A theoretical framework is presented in which the existence of locally monotonic regressions is proved, and algorithms for their computation are given. Regression is considered as an approximation problem in R^n, the criterion of approximation is derived from a semimetric, and the approximating set is the collection of signals sharing the property of being locally monotonic.
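The smoothness criterion itself is easy to state in code: a signal is locally monotonic of degree d if every window of d consecutive samples is monotonic (non-decreasing or non-increasing). A minimal checker, as a sketch of the definition rather than of the regression algorithm:

```python
def is_locally_monotonic(x, degree):
    """True if every length-`degree` window of x is non-decreasing
    or non-increasing: the local monotonicity property that defines
    the approximating set in locally monotonic regression."""
    for i in range(len(x) - degree + 1):
        w = x[i:i + degree]
        inc = all(a <= b for a, b in zip(w, w[1:]))
        dec = all(a >= b for a, b in zip(w, w[1:]))
        if not (inc or dec):
            return False
    return True

assert is_locally_monotonic([1, 2, 2, 1, 0, 0, 3], 3)      # smooth ramps allowed
assert not is_locally_monotonic([1, 3, 1, 3, 1], 3)        # oscillation rejected
```

The regression then finds, among all signals passing this test, the one closest to the input under the chosen semimetric.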

58 citations

Journal ArticleDOI
TL;DR: A QoE evaluator that accounts for interactions between stalling events, analyzes the spatial and temporal content of a video, predicts the perceptual video quality, models the state of the client-side data buffer, and consequently predicts continuous-time quality scores that agree quite well with human opinion scores is created.
Abstract: Over-the-top adaptive video streaming services are frequently impacted by fluctuating network conditions that can lead to rebuffering events (stalling events) and sudden bitrate changes. These events visually impact video consumers' quality of experience (QoE) and can lead to consumer churn. The development of models that can accurately predict viewers' instantaneous subjective QoE under such volatile network conditions could potentially enable the more efficient design of quality-control protocols for media-driven services, such as YouTube, Amazon, Netflix, and so on. However, most existing models only predict a single overall QoE score on a given video and are based on simple global video features, without accounting for relevant aspects of human perception and behavior. We have created a QoE evaluator, called the time-varying QoE Indexer, that accounts for interactions between stalling events, analyzes the spatial and temporal content of a video, predicts the perceptual video quality, models the state of the client-side data buffer, and consequently predicts continuous-time quality scores that agree quite well with human opinion scores. The new QoE predictor also embeds the impact of relevant human cognitive factors, such as memory and recency, and their complex interactions with the video content being viewed. We evaluated the proposed model on three different video databases and attained standout QoE prediction performance.
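The client-side buffer state such a model tracks can be sketched with a toy simulation: the buffer fills with downloaded media and drains at playback rate, and a rebuffering stall occurs whenever it runs dry. This is an illustrative simplification, not the paper's model:

```python
def simulate_buffer(download_secs_per_step, start=2.0):
    """Toy client buffer: each one-second step, add the seconds of
    media downloaded and drain one second of playback; count a stall
    whenever the buffer cannot supply a full second."""
    buf, stalls = start, 0
    for dl in download_secs_per_step:
        buf += dl
        if buf >= 1.0:
            buf -= 1.0   # play one second
        else:
            stalls += 1  # buffer empty: rebuffering event
    return stalls

assert simulate_buffer([1.0] * 5) == 0               # steady network: no stalls
assert simulate_buffer([0.0, 0.0, 0.0, 1.0]) == 1    # throughput dip: one stall
```

A continuous-time QoE predictor would feed signals like this buffer trajectory, together with content and perceptual-quality features, into the score model.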

57 citations

Proceedings ArticleDOI
17 Jun 2007
TL;DR: It is shown that incorporating domain specific knowledge about the structural diversity of human faces significantly improves the performance of 3D human face recognition algorithms.
Abstract: We present a systematic procedure for selecting facial fiducial points associated with diverse structural characteristics of a human face. We identify such characteristics from the existing literature on anthropometric facial proportions. We also present three dimensional (3D) face recognition algorithms, which employ Euclidean/geodesic distances between these anthropometric fiducial points as features along with linear discriminant analysis classifiers. Furthermore, we show that in our algorithms, when anthropometric distances are replaced by distances between arbitrary regularly spaced facial points, their performances decrease substantially. This demonstrates that incorporating domain specific knowledge about the structural diversity of human faces significantly improves the performance of 3D human face recognition algorithms.
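The feature-extraction step, distances between selected fiducial points, is simple to sketch. The landmark names and pairs below are hypothetical placeholders; the paper selects its points from anthropometric facial proportions and also uses geodesic (surface) distances alongside Euclidean ones:

```python
import math

def feature_vector(landmarks, pairs):
    """Euclidean distances between chosen 3D fiducial landmarks,
    used as input features for a classifier (e.g. LDA)."""
    return [math.dist(landmarks[a], landmarks[b]) for a, b in pairs]

# Hypothetical landmarks on a 3D face scan (x, y, z)
face = {"nose_tip":  (0.0, 0.0, 5.0),
        "left_eye":  (-3.0, 4.0, 0.0),
        "right_eye": (3.0, 4.0, 0.0)}
pairs = [("left_eye", "right_eye"), ("nose_tip", "left_eye")]
features = feature_vector(face, pairs)
assert features[0] == 6.0  # inter-ocular distance in this toy example
```

The paper's finding is that which pairs you choose matters: anthropometrically motivated pairs substantially outperform distances between arbitrary regularly spaced points.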

57 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and is validated against subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
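The structural similarity index combines luminance, contrast, and structure comparisons into one formula. Below is a global (single-window) version of the standard formula, sketched on 1-D signals for brevity; the published index applies it over local windows of an image and averages:

```python
def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global SSIM between two equal-length signals:
    (2*mu_x*mu_y + c1)(2*cov + c2) /
    ((mu_x^2 + mu_y^2 + c1)(var_x + var_y + c2))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

sig = [10.0, 50.0, 90.0, 130.0]
assert abs(ssim(sig, sig) - 1.0) < 1e-12       # identical signals score 1
assert ssim(sig, sig[::-1]) < 1.0              # same stats, broken structure
```

Note how the reversed signal has identical mean and variance, yet scores below 1: the covariance (structure) term is what distinguishes SSIM from error-visibility metrics.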

40,609 citations

Book
01 Jan 1998
TL;DR: A comprehensive textbook on wavelet signal processing, progressing from Fourier and time-frequency analysis through frames and wavelet bases to wavelet packets, local cosine bases, approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.
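The simplest concrete instance of the wavelet bases the book surveys is the Haar transform: pairwise averages carry the coarse approximation and pairwise differences the detail, scaled so the transform is orthonormal. A one-level Haar step and its inverse:

```python
import math

def haar_step(x):
    """One level of the orthonormal Haar DWT on an even-length signal:
    returns (approximation, detail) coefficients."""
    s = math.sqrt(2)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact reconstruction from one level of Haar coefficients."""
    s = math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

x = [4.0, 6.0, 10.0, 12.0]
a, d = haar_step(x)
assert all(abs(u - v) < 1e-12 for u, v in zip(haar_inverse(a, d), x))
```

Recursing on the approximation coefficients yields the full multiresolution decomposition that underlies the book's "wavelet zoom."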

17,693 citations

Proceedings ArticleDOI
21 Jul 2017
TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of Twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
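The "learned loss" the abstract refers to is the conditional GAN objective, which the paper combines with an L1 reconstruction term:

```latex
\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\big[\log D(x, y)\big]
  + \mathbb{E}_{x}\big[\log\big(1 - D(x, G(x))\big)\big]
```

```latex
G^{*} = \arg\min_{G}\max_{D}\;\mathcal{L}_{cGAN}(G, D) + \lambda\,\mathcal{L}_{L1}(G),
\qquad
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y}\big[\lVert y - G(x) \rVert_{1}\big]
```

Here x is the input image, y the target, G the generator, and D the discriminator; the adversarial term supplies the learned loss while the L1 term keeps outputs close to the ground truth.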

11,958 citations

Posted Content
TL;DR: Conditional adversarial networks, as discussed by the authors, offer a general-purpose solution to image-to-image translation problems and can be used to synthesize photos from label maps, reconstruct objects from edge maps, and colorize images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.

11,127 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known, especially about mud-dominated calciclastic submarine fan systems. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones, which are characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations