Author

Alan C. Bovik

Bio: Alan C. Bovik is an academic researcher at the University of Texas at Austin. The author has contributed to research on the topics of image quality and video quality, has an h-index of 102, and has co-authored 837 publications receiving 96,088 citations. Previous affiliations of Alan C. Bovik include the University of Illinois at Urbana–Champaign and the University of Sydney.


Papers
Journal ArticleDOI
TL;DR: The proposed debanding filter is able to adaptively smooth banded regions while preserving image edges and details, yielding perceptually enhanced gradient rendering with limited bit-depths.
Abstract: Banding artifacts, which manifest as staircase-like color bands on pictures or video frames, are a common distortion caused by compression of low-textured smooth regions. These false contours can be very noticeable even on high-quality videos, especially when displayed on high-definition screens. Yet, relatively little attention has been paid to this problem. Here we consider banding artifact removal as a visual enhancement problem, and accordingly, we solve it by applying a form of content-adaptive smoothing filtering followed by dithered quantization, as a post-processing module. The proposed debanding filter is able to adaptively smooth banded regions while preserving image edges and details, yielding perceptually enhanced gradient rendering with limited bit-depths. Experimental results show that our proposed debanding filter outperforms state-of-the-art false contour removal algorithms both visually and quantitatively.

12 citations
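A minimal sketch of the enhancement pipeline described above, in Python: smooth only low-variance (banded) regions, then re-quantize with added dither to break up false contours. The window size, variance threshold, and filter choices here are illustrative placeholders, not the paper's exact filter design.

```python
import numpy as np
from scipy import ndimage

def deband(img, var_thresh=4.0, sigma=2.0, levels=256):
    """Illustrative debanding: content-adaptive smoothing followed by
    dithered quantization. A sketch of the idea, not the published filter."""
    img = img.astype(np.float64)
    local_mean = ndimage.uniform_filter(img, size=7)
    local_var = ndimage.uniform_filter(img**2, size=7) - local_mean**2
    smooth = ndimage.gaussian_filter(img, sigma=sigma)
    # Replace pixels only in smooth (low-variance) regions, preserving edges.
    out = np.where(local_var < var_thresh, smooth, img)
    # Dithered quantization: random noise before rounding hides new contours.
    step = 256.0 / levels
    dither = np.random.uniform(-0.5, 0.5, img.shape) * step
    return np.clip(np.round((out + dither) / step) * step, 0.0, 255.0)
```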

Book ChapterDOI
TL;DR: Techniques for describing and processing texture as a constrained optimization problem are outlined and the recently developed SAWTA neural network for texture-based segmentation is presented.
Abstract: We review key conventional and neural network techniques for processing of textured images, and highlight the relationships among different methodologies and schemes. Texture, which provides useful information for segmentation of scenes, classification of surface materials and computation of shape, is exploited by sophisticated biological vision systems for image analysis. A brief overview of biological visual processing provides the setting for this study of textured image processing. We explain the use of multiple Gabor filters for segmentation of textured images based on a locally quasimonochromatic image texture model. This approach is compared to the functioning of localized neuronal receptive fields. Cooperative neural processes for perceptual grouping and emergent segmentation are reviewed next, and related to relaxation labelling. The recently developed SAWTA neural network for texture-based segmentation is then presented. Finally, techniques for describing and processing texture as a constrained optimization problem are outlined.

12 citations
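The multi-Gabor front end can be illustrated compactly: each channel is a complex Gabor filter tuned to one spatial frequency and orientation, and smoothed response magnitudes form per-pixel texture features on which a clustering stage (e.g., k-means) would assign segment labels. The frequencies, orientations, and window sizes below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=4.0, size=21):
    """Complex Gabor kernel tuned to one spatial frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * x_rot)

def gabor_features(img):
    """Per-pixel texture features: smoothed magnitudes of a small
    Gabor filter bank (2 frequencies x 4 orientations)."""
    feats = []
    for freq in (0.1, 0.2):
        for theta in np.linspace(0.0, np.pi, 4, endpoint=False):
            resp = fftconvolve(img, gabor_kernel(freq, theta), mode="same")
            # Local energy: smooth the response magnitude over a 9x9 window.
            energy = fftconvolve(np.abs(resp), np.ones((9, 9)) / 81.0, mode="same")
            feats.append(energy)
    return np.stack(feats, axis=-1)  # shape (H, W, 8)
```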

Proceedings ArticleDOI
01 Dec 1985
TL;DR: Optimal statistical procedures for detecting object boundaries in speckle noise imagery, principally synthetic aperture radar (SAR), are formulated: a parametric test that thresholds the ratio of local neighborhood averages, and a nonparametric test based on a linear rank statistic.
Abstract: Optimal statistical procedures are formulated for detecting object boundaries in speckle noise imagery. Although speckle noise is found in many applications, our principal interest is in synthetic aperture radar (SAR). The first procedure described is parametric. The ratio of local neighborhood averages is thresholded, which is a statistically natural approach since the noise is often modeled as multiplicative. A nonparametric procedure based on a linear rank statistic is also described, which can be shown to be locally most powerful (among rank tests). Examples are given, and comparisons are offered. The parametric scheme performed slightly better than the rank-order method in the examples, but the inherent robustness of the latter may recommend it for the practical application.

12 citations
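A minimal sketch of the parametric test for a single (vertical-boundary) orientation: compare mean intensities in the two halves of a sliding window and flag pixels where their ratio departs from 1. Because speckle is modeled as multiplicative, the ratio statistic is insensitive to the local mean level. Window size and threshold are illustrative.

```python
import numpy as np
from scipy import ndimage

def ratio_edge_map(img, size=7, thresh=1.5):
    """Ratio-of-averages boundary detector for speckled imagery,
    one orientation only; a full detector would scan several."""
    half = size // 2
    left = np.zeros((size, size)); left[:, :half] = 1.0 / (size * half)
    right = np.zeros((size, size)); right[:, half + 1:] = 1.0 / (size * half)
    mean_l = ndimage.convolve(img.astype(np.float64), left)
    mean_r = ndimage.convolve(img.astype(np.float64), right)
    # Ratio of the larger to the smaller half-window mean (always >= 1).
    ratio = np.maximum(mean_l, mean_r) / (np.minimum(mean_l, mean_r) + 1e-9)
    return ratio > thresh  # boolean boundary map
```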

Journal ArticleDOI
TL;DR: This work proposes a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images using well-accepted univariate natural scene statistics models and recent bivariate/correlation NSS models.
Abstract: Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.

12 citations
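The MGM likelihood plus Bayesian predictor can be viewed as Gaussian-mixture regression: fit a joint mixture over concatenated (image-feature, depth) vectors, then estimate depth as the responsibility-weighted conditional mean given the observed image features. The sketch below shows only that generic machinery, not the paper's NSS feature extraction or depth-pattern dictionary; `image_feats` and `depths` are assumed precomputed, one row per local patch.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_joint_model(image_feats, depths, n_components=8):
    """Multivariate Gaussian mixture over joint (feature, depth) vectors,
    a stand-in for the paper's MGM likelihood model."""
    joint = np.hstack([image_feats, depths])
    return GaussianMixture(n_components=n_components,
                           covariance_type="full").fit(joint)

def predict_depth(gmm, x):
    """Bayesian predictor: per-component conditional means of depth given
    the image-feature vector x, weighted by component responsibilities."""
    f = x.size
    num, den = 0.0, 0.0
    for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mu_f, mu_d = mu[:f], mu[f:]
        S_ff, S_df = cov[:f, :f], cov[f:, :f]
        diff = x - mu_f
        # Unnormalized responsibility of this component for x.
        r = w * np.exp(-0.5 * diff @ np.linalg.solve(S_ff, diff))
        r /= np.sqrt(np.linalg.det(S_ff))
        num += r * (mu_d + S_df @ np.linalg.solve(S_ff, diff))
        den += r
    return num / den
```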

Journal ArticleDOI
TL;DR: A cross-layer optimization-based scheduling scheme called binding optimization of duty cycling and networking through energy tracking (BUCKET) is developed, which is formulated in four stages and displays performance enhancements of ~12-15% over conventional methods in terms of the average service rate.
Abstract: Renewable solar energy harvesting systems have received considerable attention as a possible substitute for conventional chemical batteries in sensor networks. However, it is difficult to optimize the use of solar energy based only on empirical power acquisition patterns in sensor networks. We apply acquisition patterns from actual solar energy harvesting systems and build a framework to maximize the utilization of solar energy in general sensor networks. To achieve this goal, we develop a cross-layer optimization-based scheduling scheme called binding optimization of duty cycling and networking through energy tracking (BUCKET), which is formulated in four stages: 1) prediction of energy harvesting and arriving traffic; 2) internode optimization at the transport and network layers; 3) intranode optimization at the medium access control layer; and 4) flow control of generated communication task sets using a token-bucket algorithm. Monitoring of the structural health of bridges is shown to be a potential application of an energy-harvesting sensor network. The example network deploys five sensor types: 1) temperature; 2) strain gauge; 3) accelerometer; 4) pressure; and 5) humidity. In the simulations, the BUCKET algorithm displays performance enhancements of ~12-15% over those of conventional methods in terms of the average service rate.

12 citations
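Stage 4's flow control uses the classic token-bucket primitive: tokens accrue at a fixed rate up to a burst capacity, and a task is admitted only if it can pay its token cost. A generic version follows (not the paper's energy-aware parameterization, where the rate would track harvested energy):

```python
import time

class TokenBucket:
    """Classic token-bucket admission control for generated task sets."""
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost=1.0):
        """Admit a task of the given token cost if the bucket can pay it."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```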


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, which can be applied to both subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.

40,609 citations
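The index itself is compact: local means, variances, and covariance are combined into luminance, contrast, and structure comparisons. Below is a short re-implementation of the published formula with the standard constants C1 = (0.01L)^2 and C2 = (0.03L)^2 and a Gaussian window; it is a sketch for clarity, not the authors' reference MATLAB code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssim(x, y, data_range=255.0, sigma=1.5):
    """Mean structural similarity between two grayscale images."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = gaussian_filter(x, sigma), gaussian_filter(y, sigma)
    var_x = gaussian_filter(x * x, sigma) - mu_x**2
    var_y = gaussian_filter(y * y, sigma) - mu_y**2
    cov_xy = gaussian_filter(x * y, sigma) - mu_x * mu_y
    # SSIM map, then pooled by the mean as in the paper.
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()
```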

Book
01 Jan 1998
TL;DR: A book-length tour of wavelet signal processing, from Fourier analysis and time-frequency methods through frames, wavelet bases, wavelet packets, and local cosine bases, to approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations
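The book's central construction, an orthonormal wavelet basis, is easiest to see in its simplest instance: the Haar transform, which splits a signal into local averages and local differences and recurses on the averages. A one-level NumPy sketch (the function name and scope are illustrative):

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform.
    Assumes len(x) is even; recurse on `approx` for more levels."""
    x = np.asarray(x, dtype=np.float64)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass: scaled averages
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass: scaled differences
    return approx, detail
```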

Proceedings ArticleDOI
21 Jul 2017
TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of Twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.

11,958 citations
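The paper's objective pairs a conditional GAN loss with an L1 reconstruction term weighted by lambda (100 in the paper). A schematic of the two losses in PyTorch, assuming user-supplied networks `G` (input image to output image) and `D` (scoring (input, image) pairs with raw logits); this follows the published formulation rather than the released pix2pix code.

```python
import torch
import torch.nn.functional as F

def pix2pix_losses(G, D, x, y, lam=100.0):
    """Conditional GAN + L1 losses for one batch of (input x, target y)."""
    fake = G(x)
    real_logits = D(x, y)
    fake_logits = D(x, fake.detach())  # detach: D's update leaves G alone
    d_loss = (F.binary_cross_entropy_with_logits(real_logits,
                                                 torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits,
                                                   torch.zeros_like(fake_logits)))
    gen_logits = D(x, fake)
    # G tries to fool D while staying close to the target in L1.
    g_loss = (F.binary_cross_entropy_with_logits(gen_logits,
                                                 torch.ones_like(gen_logits))
              + lam * F.l1_loss(fake, y))
    return d_loss, g_loss
```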

Posted Content
TL;DR: Conditional adversarial networks, as discussed by the authors, are a general-purpose solution to image-to-image translation problems that can be used to synthesize photos from label maps, reconstruct objects from edge maps, and colorize images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.

11,127 citations

Journal ArticleDOI
01 Apr 1988 - Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveal a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations