Author

Alan C. Bovik

Bio: Alan C. Bovik is an academic researcher from the University of Texas at Austin. The author has contributed to research in the topics of image quality and video quality. The author has an h-index of 102 and has co-authored 837 publications receiving 96,088 citations. Previous affiliations of Alan C. Bovik include the University of Illinois at Urbana–Champaign and the University of Sydney.


Papers
Book Chapter
01 Jan 2009
TL;DR: Some of the most exciting developments in medical imaging have arisen from new sensors that record image data from previously little-used sources of radiation, or that sense radiation in new ways, as in computer-aided tomography, where X-ray data is collected from multiple angles to form a rich aggregate image.
Abstract: One aspect of image processing that makes it such an interesting topic of study is the amazing diversity of applications that make use of image processing or analysis techniques. Virtually every branch of science has subdisciplines that use recording devices or sensors to collect image data from the universe. This data is often multidimensional and can be arranged in a format that is suitable for human viewing. Viewable datasets like this can be regarded as images and processed using established techniques for image processing, even if the information has not been derived from visible light sources. Another rich aspect of digital imaging is the diversity of image types that arise, and which can derive from nearly every type of radiation. Indeed, some of the most exciting developments in medical imaging have arisen from new sensors that record image data from previously little-used sources of radiation, such as positron emission tomography and magnetic resonance imaging, or that sense radiation in new ways, as in computer-aided tomography, where X-ray data is collected from multiple angles to form a rich aggregate image.
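
The chapter's closing example, computed tomography aggregating X-ray projections taken from many angles, can be illustrated with a toy reconstruction. The following is a minimal sketch of unfiltered backprojection, not drawn from the chapter itself; the `sinogram` array (one 1-D projection per row) and the use of SciPy's image rotation are assumptions made for the illustration.

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles_deg):
    """Naive (unfiltered) backprojection: smear each 1-D projection
    back across the image plane at its acquisition angle and sum."""
    n_det = sinogram.shape[1]
    recon = np.zeros((n_det, n_det))
    for proj, angle in zip(sinogram, angles_deg):
        # Replicate the 1-D projection along one axis, then rotate it
        # back to the angle at which it was measured.
        smear = np.tile(proj, (n_det, 1))
        recon += rotate(smear, angle, reshape=False, order=1)
    return recon / len(angles_deg)
```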

7 citations

Patent
03 Jun 2013
TL;DR: In this article, a method, system, and computer program product are presented for improving the perceptual quality and naturalness of an image captured by an image acquisition device (e.g., a digital camera).
Abstract: A method, system and computer program product for improving the perceptual quality and naturalness of an image captured by an image acquisition device (e.g., digital camera). Statistical features of a scene being imaged by the image acquisition device are derived from models of natural images. These statistical features are measured and mapped onto the control parameters (e.g., exposure, ISO) of the digital acquisition device. By mapping these statistical features onto the control parameters, the perceptual quality and naturalness of the scene being imaged may be based on the values of these control parameters. As a result, these control parameters are modified to maximize the perceptual quality and naturalness of the scene being imaged. After modification of these control parameters, the image is captured by the image acquisition device. In this manner, the perceptual quality and naturalness of the image captured by the image acquisition device is improved.
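
As a loose illustration of the workflow the abstract describes, and not the patented method itself, one can score candidate control-parameter settings by how closely simple scene statistics match what is expected of natural images, then capture with the best-scoring setting. The statistic used below (log-luminance spread penalized by clipping) and the `capture_preview` callback are assumptions for the sketch.

```python
import numpy as np

def naturalness_score(gray):
    """Toy 'naturalness' measure: natural photos tend to have
    log-luminance values that are neither crushed to black nor
    clipped at white; penalize clipping and reward mid-range spread."""
    g = np.clip(gray, 1e-4, 1.0)
    clipped = np.mean(g <= 1e-3) + np.mean(g >= 0.999)
    spread = np.std(np.log(g))
    return spread - 5.0 * clipped

def choose_exposure(capture_preview, candidate_exposures):
    """Grid-search the exposure whose preview frame maximizes the score.
    `capture_preview(exposure)` is a hypothetical camera callback
    returning a grayscale frame scaled to [0, 1]."""
    return max(candidate_exposures,
               key=lambda e: naturalness_score(capture_preview(e)))
```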

7 citations

Proceedings Article
03 Nov 2002
TL;DR: This paper implements a turbo encoder and SOVA-based turbo decoder in real-time software on a TMS320C6700 digital signal processor (DSP) and presents dataflow modeling for a turbo channel coding subsystem.
Abstract: Turbo codes are used for error protection, especially in wireless systems. A turbo encoder consists of two recursive systematic convolutional component encoders connected in parallel and separated by a random interleaver. A turbo decoder, which is iterative, is typically based on either a soft output Viterbi algorithm (SOVA) or a maximum a posteriori (MAP) algorithm. MAP is roughly three times more computationally complex than SOVA, but provides an additional 0.5 dB of coding gain. We implement a turbo encoder and SOVA-based turbo decoder in real-time software on a TMS320C6700 digital signal processor (DSP). The contributions of this paper are: (1) the first publicly available implementation of a SOVA-based turbo decoder on a C6000 DSP; (2) speedups of 162x for the encoder on a C6200 DSP and 11.7x for the decoder on a C6700 DSP over level-three C compiler optimization; and (3) dataflow modeling for a turbo channel coding subsystem.
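
For readers unfamiliar with the encoder structure described above, here is a minimal sketch of a rate-1/3 turbo encoder, two identical recursive systematic convolutional (RSC) encoders in parallel with the second fed a randomly interleaved copy of the input. It is not the paper's DSP implementation, and the memory-2 generator polynomials and random interleaver are illustrative choices.

```python
import numpy as np

def rsc_encode(bits):
    """Memory-2 recursive systematic convolutional encoder with
    feedback polynomial 7 and feed-forward polynomial 5 (octal);
    returns only the parity stream (the systematic bits are the input)."""
    s1 = s2 = 0
    parity = []
    for b in bits:
        fb = b ^ s1 ^ s2          # recursive feedback term
        parity.append(fb ^ s2)    # feed-forward parity bit
        s1, s2 = fb, s1           # shift the register
    return parity

def turbo_encode(bits, seed=0):
    """Rate-1/3 turbo encoder: systematic bits, parity from encoder 1,
    and parity from encoder 2 run on an interleaved copy of the input."""
    rng = np.random.default_rng(seed)
    interleaver = rng.permutation(len(bits))
    p1 = rsc_encode(bits)
    p2 = rsc_encode([bits[i] for i in interleaver])
    # The interleaver pattern must be shared with the iterative decoder.
    return list(bits), p1, p2, interleaver
```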

7 citations

Proceedings Article
01 Apr 2018
TL;DR: This work makes the first attempt to use bivariate NSS features to build a model of no-reference image quality prediction, and shows that the bivariate model outperforms existing state-of-the-art image quality predictors.
Abstract: The univariate statistics of bandpass-filtered images provide powerful features that drive many successful image quality assessment (IQA) algorithms. Bivariate Natural Scene Statistics (NSS), which model the joint statistics of multiple bandpass image samples, also provide potentially powerful features for assessing the perceptual quality of images, by capturing both image and distortion correlations. Here, we make the first attempt to use bivariate NSS features to build a model of no-reference image quality prediction. We show that our bivariate model outperforms existing state-of-the-art image quality predictors.
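
As a hedged sketch of the kind of feature the abstract refers to, and not the authors' exact model, the snippet below computes divisively normalized bandpass (MSCN) coefficients and then an empirical joint histogram of horizontally adjacent coefficients; the shape of such a bivariate histogram is what these models summarize. The Gaussian window width and histogram range are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7/6):
    """Mean-subtracted, contrast-normalized (MSCN) bandpass coefficients
    of a grayscale image, a standard NSS representation."""
    mu = gaussian_filter(image, sigma)
    var = gaussian_filter(image * image, sigma) - mu * mu
    return (image - mu) / (np.sqrt(np.abs(var)) + 1e-3)

def bivariate_histogram(image, bins=32):
    """Joint histogram of horizontally adjacent MSCN coefficients;
    its spread and correlation act as simple bivariate NSS features."""
    c = mscn(image)
    x, y = c[:, :-1].ravel(), c[:, 1:].ravel()
    hist, _, _ = np.histogram2d(x, y, bins=bins, range=[[-3, 3], [-3, 3]])
    return hist / hist.sum()
```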

7 citations

Journal Article
TL;DR: A definition for multidimensional instantaneous bandwidth is presented and used to develop criteria for determining the multicomponent nature of a signal, and it is demonstrated by testing the validity of a multicomponent interpretation for a complicated nonstationary texture image.
Abstract: In this brief paper, we extend the notion of multicomponent signal into multiple dimensions. A definition for multidimensional instantaneous bandwidth is presented and used to develop criteria for determining the multicomponent nature of a signal. We demonstrate application of the criteria by testing the validity of a multicomponent interpretation for a complicated nonstationary texture image.
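
For the one-dimensional case that the paper extends to multiple dimensions, instantaneous frequency and bandwidth can be estimated from the analytic signal. The sketch below illustrates that 1-D notion only and is not the paper's multidimensional criteria; the amplitude-derivative definition of instantaneous bandwidth is a commonly used convention and is an assumption here.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_freq_and_bandwidth(x, fs):
    """1-D analytic-signal estimates: instantaneous frequency is the
    derivative of the unwrapped phase; instantaneous bandwidth is taken
    as the magnitude of the derivative of the log-amplitude."""
    z = hilbert(x)                          # analytic signal of x
    amp = np.abs(z)
    phase = np.unwrap(np.angle(z))
    inst_freq = np.gradient(phase) * fs / (2 * np.pi)                  # Hz
    inst_bw = np.abs(np.gradient(np.log(amp + 1e-12))) * fs / (2 * np.pi)
    return inst_freq, inst_bw
```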

7 citations


Cited by
Journal Article
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and is validated against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
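
For orientation, a single-scale SSIM map following the standard published formulation (Gaussian-weighted local means, variances, and covariance, with the customary K1 = 0.01 and K2 = 0.03 constants) can be written compactly; this is an illustrative re-implementation, not the authors' released MATLAB code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssim(x, y, data_range=255.0, sigma=1.5, k1=0.01, k2=0.03):
    """Mean single-scale SSIM between two grayscale images of equal size."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    x, y = x.astype(np.float64), y.astype(np.float64)
    # Gaussian-weighted local statistics.
    mu_x, mu_y = gaussian_filter(x, sigma), gaussian_filter(y, sigma)
    var_x = gaussian_filter(x * x, sigma) - mu_x ** 2
    var_y = gaussian_filter(y * y, sigma) - mu_y ** 2
    cov_xy = gaussian_filter(x * y, sigma) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
               ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return ssim_map.mean()
```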

40,609 citations

Book
01 Jan 1998
TL;DR: A textbook treatment of wavelet analysis, spanning Fourier methods, time-frequency analysis, frames, wavelet and wavelet packet bases, approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.
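
As a small, illustrative companion to the chapters on wavelet bases and transform coding, a multi-level discrete wavelet decomposition and its perfect reconstruction can be computed with the PyWavelets package; the toy signal, wavelet choice, and decomposition depth below are arbitrary assumptions.

```python
import numpy as np
import pywt  # PyWavelets

# A toy signal: a smooth ramp with a sharp transient, the kind of
# structure that wavelet bases represent sparsely.
t = np.linspace(0, 1, 512)
signal = t + (t > 0.5) * 0.5

# Two-level decomposition with a Daubechies-2 wavelet:
# coeffs = [approximation, detail level 2, detail level 1].
coeffs = pywt.wavedec(signal, 'db2', level=2)
reconstructed = pywt.waverec(coeffs, 'db2')

print([c.shape for c in coeffs],
      np.allclose(signal, reconstructed[:len(signal)]))
```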

17,693 citations

Proceedings Article
21 Jul 2017
TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of Twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
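
As a hedged sketch of the conditional-GAN objective the abstract describes, simplified relative to the full pix2pix architecture, the generator is trained with an adversarial term plus an L1 term against the target image, while the discriminator sees (input, output) pairs. The generic discriminator module `D`, the 4-D image tensors, and the L1 weight of 100 follow common practice and are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def generator_loss(D, x, fake, target, lambda_l1=100.0):
    """Conditional GAN generator loss: fool D on (input, fake) pairs
    and stay close to the ground truth under an L1 penalty."""
    pred_fake = D(torch.cat([x, fake], dim=1))
    adv = F.binary_cross_entropy_with_logits(
        pred_fake, torch.ones_like(pred_fake))
    return adv + lambda_l1 * F.l1_loss(fake, target)

def discriminator_loss(D, x, fake, target):
    """Discriminator distinguishes real (input, target) pairs from
    fake (input, generated) pairs; the generator output is detached."""
    pred_real = D(torch.cat([x, target], dim=1))
    pred_fake = D(torch.cat([x, fake.detach()], dim=1))
    loss_real = F.binary_cross_entropy_with_logits(
        pred_real, torch.ones_like(pred_real))
    loss_fake = F.binary_cross_entropy_with_logits(
        pred_fake, torch.zeros_like(pred_fake))
    return 0.5 * (loss_real + loss_fake)
```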

11,958 citations

Posted Content
TL;DR: Conditional adversarial networks, as discussed by the authors, provide a general-purpose solution to image-to-image translation problems and can be used to synthesize photos from label maps, reconstruct objects from edge maps, and colorize images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.

11,127 citations

Journal Article
01 Apr 1988 - Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes in the Lower Carboniferous of the Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit the submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood. Subsequently, very little is known, especially about mud-dominated calciclastic submarine fan systems. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones, which are characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These

9,929 citations