Author

Alan C. Bovik

Bio: Alan C. Bovik is an academic researcher from the University of Texas at Austin. The author has contributed to research in topics including Image quality and Video quality, has an h-index of 102, and has co-authored 837 publications receiving 96,088 citations. Previous affiliations of Alan C. Bovik include the University of Illinois at Urbana-Champaign and the University of Sydney.


Papers
Journal ArticleDOI
TL;DR: A new feature map, called the percentage of un-linked pixels (PUP), is developed that is descriptive of the presence of disparity and can be used to accurately predict experienced 3D visual discomfort without the need to actually calculate disparity values.
Abstract: Almost all existing 3D visual discomfort prediction models are based, at least in part, on features that are extracted from computed disparity maps. These include estimated quantities such as the maximum disparity, disparity range, disparity energy, and other measures of the disparity distribution. A common first step when implementing a 3D visual discomfort model is some form of disparity calculation, so the accuracy of prediction largely depends on the accuracy of the disparity result. Unfortunately, most algorithms that compute disparity maps are expensive, and are not guaranteed to deliver sufficiently accurate or perceptually relevant disparity data. This raises the question of whether it is possible to build a 3D discomfort prediction model without explicit disparity calculation. Towards this possibility, we have developed a new feature map, called the percentage of un-linked pixels (PUP), that is descriptive of the presence of disparity and can be used to accurately predict experienced 3D visual discomfort without the need to actually calculate disparity values. Instead, PUP features are extracted by predicting the percentage of un-linked pixels in corresponding retinal patches of image pairs. The un-linked pixels are determined by feature classification on orientation and luminance distributions. Calculation of PUP maps is much faster than traditional disparity computation, and the experimental results demonstrate that the predictive power attained using the PUP map is highly competitive with prior models that rely on computed disparity maps. Highlights: A first-of-its-kind 3D discomfort model without disparity calculation is proposed. A new feature map, the percentage of un-linked pixels, is developed in the model. The PUP map is superior to prior models that rely on computed disparity maps.
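
The abstract does not spell out how the PUP features are computed, but the idea of counting pixels in one retinal patch that cannot be matched to the other view by luminance and orientation can be illustrated with a rough sketch. The patch size, horizontal search range, and tolerance thresholds below are hypothetical, and the matching rule is only a stand-in for the feature classification described in the paper.

```python
import numpy as np

def pup_map(left, right, patch=32, search=16, lum_tol=0.05, ori_tol=0.35):
    """Illustrative sketch of a percentage-of-un-linked-pixels (PUP) map.

    Assumes `left` and `right` are luminance images scaled to [0, 1].
    A pixel in the left view is treated as 'linked' if some pixel within a
    horizontal search window in the right view has similar luminance and
    gradient orientation; otherwise it is 'un-linked'. Each entry of the
    returned map is the fraction of un-linked pixels in one patch. All
    parameter values are illustrative, not the published ones.
    """
    gy_l, gx_l = np.gradient(left)
    gy_r, gx_r = np.gradient(right)
    ori_l, ori_r = np.arctan2(gy_l, gx_l), np.arctan2(gy_r, gx_r)

    h, w = left.shape
    unlinked = np.ones((h, w), dtype=bool)
    for d in range(-search, search + 1):
        shifted_lum = np.roll(right, d, axis=1)
        shifted_ori = np.roll(ori_r, d, axis=1)
        ori_diff = np.abs(np.angle(np.exp(1j * (ori_l - shifted_ori))))
        match = (np.abs(left - shifted_lum) < lum_tol) & (ori_diff < ori_tol)
        unlinked &= ~match  # a single match at any shift links the pixel

    # Pool the binary un-linked mask into patch-wise percentages.
    ph, pw = h // patch, w // patch
    pooled = unlinked[:ph * patch, :pw * patch].reshape(ph, patch, pw, patch)
    return pooled.mean(axis=(1, 3))
```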

21 citations

Proceedings ArticleDOI
03 Dec 2010
TL;DR: A novel two-stage framework for distortion-independent blind image quality assessment based on natural scene statistics (NSS) is proposed, which can be extended beyond the distortion pool considered here, and each proposed module can be replaced by better-performing ones in the future.
Abstract: Most present-day no-reference/blind image quality assessment (NR IQA) algorithms are distortion specific, i.e., they assume that the distortion affecting the image is known. Here we propose a novel two-stage framework for distortion-independent blind image quality assessment based on natural scene statistics (NSS). The proposed framework is modular in that it can be extended beyond the distortion pool considered here, and each proposed module can be replaced by better-performing ones in the future. We describe a 4-distortion demonstration of the proposed framework and show that it performs competitively with the full-reference peak signal-to-noise ratio (PSNR) on the LIVE IQA database. A software release of the proposed index has been made available online: http://live.ece.utexas.edu/research/quality/BIQI_4D_release.zip.
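
The two-stage structure described above lends itself to a simple pooled formulation: a first-stage classifier produces the probability of each distortion type, and second-stage, distortion-specific quality estimators are combined using those probabilities. The sketch below assumes pre-trained models with scikit-learn-style predict_proba/predict interfaces; the actual features, learners, and pooling used in the paper may differ.

```python
import numpy as np

def two_stage_quality(nss_features, classifier, regressors):
    """Sketch of a two-stage blind IQA framework.

    Stage 1: `classifier` estimates the probability that the image suffers
    from each distortion type in the pool (e.g., JPEG, JP2K, blur, noise).
    Stage 2: per-distortion quality regressors, one per distortion type and
    trained on the same NSS features, each produce a quality estimate.
    The final score is the probability-weighted sum of those estimates.
    `classifier` and `regressors` are hypothetical pre-trained objects.
    """
    x = np.asarray(nss_features, dtype=float).reshape(1, -1)
    probs = classifier.predict_proba(x)[0]                      # stage 1: p(distortion | features)
    scores = np.array([r.predict(x)[0] for r in regressors])    # stage 2: per-distortion quality
    return float(np.dot(probs, scores))                         # probability-weighted pooling
```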

21 citations

Journal ArticleDOI
TL;DR: A new feature normalization method is introduced for M-FISH images that reduces the difference in the feature distributions among different images using the expectation maximization (EM) algorithm and is as accurate as the maximum-likelihood classifier, whose accuracy also significantly improved after the EM normalization.
Abstract: Multicolor fluorescence in situ hybridization (M-FISH) techniques provide color karyotyping that allows simultaneous analysis of numerical and structural abnormalities of whole human chromosomes. Chromosomes are stained combinatorially in M-FISH. By analyzing the intensity combinations of each pixel, all chromosome pixels in an image are classified. Often, the intensity distributions of different images are found to be considerably different, and this difference becomes a source of pixel misclassifications. Improving pixel classification accuracy is the most important task for ensuring the success of the M-FISH technique. In this paper, we introduce a new feature normalization method for M-FISH images that reduces the difference in the feature distributions among different images using the expectation-maximization (EM) algorithm. We also introduce a new unsupervised, nonparametric classification method for M-FISH images. The performance of the classifier is as accurate as that of the maximum-likelihood classifier, whose accuracy also significantly improved after the EM normalization. We would expect that any classifier will likely produce improved classification accuracy following the EM normalization. Since the developed classification method does not require training data, it is highly convenient when ground truth does not exist. A significant improvement in pixel classification accuracy was achieved after the new feature normalization. Indeed, the overall pixel classification accuracy improved by 20% after EM normalization.
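
The abstract does not give the EM normalization procedure itself; one plausible reading, sketched below, is to fit a small Gaussian mixture to each fluor channel with EM and then map the fitted components onto those of a reference image. The two-component choice, the component-matching rule, and the function names are assumptions made for illustration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def em_normalize_channel(channel, ref_means, ref_stds, n_components=2, seed=0):
    """Illustrative EM-based normalization of one M-FISH fluor channel.

    A Gaussian mixture (roughly: background vs. stained pixels for two
    components) is fitted to the channel intensities with EM. Each pixel is
    then shifted and scaled so that the component it is assigned to maps onto
    the corresponding reference component, whose means/stds (`ref_means`,
    `ref_stds`, ordered by increasing mean) would be taken from a chosen
    reference image. This is a sketch, not the published procedure.
    """
    x = channel.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(x)

    order = np.argsort(gmm.means_.ravel())            # align components by mean
    means = gmm.means_.ravel()[order]
    stds = np.sqrt(gmm.covariances_.ravel()[order])

    labels = gmm.predict(x)
    rank = np.empty_like(order)
    rank[order] = np.arange(n_components)             # component index -> rank by mean
    k = rank[labels]
    normalized = (x.ravel() - means[k]) / stds[k] * np.asarray(ref_stds)[k] + np.asarray(ref_means)[k]
    return normalized.reshape(channel.shape)
```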

21 citations

Journal ArticleDOI
TL;DR: A novel variant of the classification image paradigm that allows us to rapidly reveal strategies used by observers in visual search tasks is proposed and a new classification taxonomy is introduced that distinguishes between foveal and peripheral processes.
Abstract: We propose a novel variant of the classification image paradigm that allows us to rapidly reveal strategies used by observers in visual search tasks. We make use of eye tracking, 1/f noise, and a grid-like stimulus ensemble, and also introduce a new classification taxonomy that distinguishes between foveal and peripheral processes. We tested our method with three human observers and two simple shapes used as search targets. The classification images obtained show the efficacy of the proposed method by revealing the features used by the observers in as few as 200 trials. Using two control experiments, we evaluated the use of naturalistic 1/f noise with classification images in comparison with the more commonly used white noise, and compared the performance of our technique with that of an earlier approach without a stimulus grid.
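
For context, a classification image is conventionally estimated by combining the mean noise fields from trials grouped by stimulus and response; the sketch below shows only that standard combination and does not model the eye tracking, 1/f noise, stimulus grid, or foveal/peripheral taxonomy introduced in the paper.

```python
import numpy as np

def classification_image(noise_fields, target_present, responded_present):
    """Standard classification-image estimate from noisy yes-no trials.

    noise_fields: array of shape (n_trials, H, W) holding the noise added on
    each trial; target_present and responded_present are boolean arrays of
    length n_trials. The estimate is the usual combination
    (hits + false alarms) - (misses + correct rejections) of mean noise
    fields, which highlights the pixels the observer's decisions correlate
    with. Variable names are illustrative.
    """
    s = np.asarray(target_present, bool)
    r = np.asarray(responded_present, bool)
    n = np.asarray(noise_fields, float)

    def mean_field(mask):
        # Average noise over the selected trials; zeros if the cell is empty.
        return n[mask].mean(axis=0) if mask.any() else np.zeros(n.shape[1:])

    return (mean_field(s & r) + mean_field(~s & r)) \
         - (mean_field(s & ~r) + mean_field(~s & ~r))
```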

21 citations

Patent
30 Apr 2009
TL;DR: In this paper, a method and apparatus detect one or more spiculated masses in an image using a processor, and an enhanced image is created by combining the outputs of all of the filtering steps.
Abstract: A method and apparatus detects one or more spiculated masses in an image using a processor. The image is received in the processor. The received image is filtered using one or more Gaussian filters to detect one or more central mass regions. The received image is also filtered using one or more spiculated lesion filters to detect where the one or more spiculated masses converge. In addition, the received image is filtered using one or more Difference-of-Gaussian filters to suppress one or more linear structures. An enhanced image showing the detected spiculated masses is created by combining an output from all of the filtering steps. The enhanced image is then provided to an output of the processor.
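
The patent text above names three filtering stages (Gaussian filters for central mass regions, spiculated lesion filters for converging spicules, and Difference-of-Gaussians filters to suppress linear structures) without giving their parameters. The sketch below is a loose illustration using standard SciPy filters; the scales, the gradient-energy stand-in for the spiculated lesion filters, and the way responses are combined are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_spiculated_masses(image, blob_sigmas=(4, 8, 16), dog_sigmas=(1, 3)):
    """Illustrative enhancement map loosely following the steps above.

    (1) Multi-scale Gaussian filtering responds to central mass regions,
    (2) smoothed gradient-magnitude energy stands in for the spiculated
        lesion filters that detect converging spicules, and
    (3) a Difference-of-Gaussians response is used to down-weight thin
        linear structures. All scales and the combination rule are
        assumptions for illustration, not the patented filter bank.
    """
    img = image.astype(float)

    # 1) Central mass response: strongest multi-scale Gaussian response.
    blob = np.max([gaussian_filter(img, s) for s in blob_sigmas], axis=0)

    # 2) Rough spicule-convergence proxy: pooled local gradient energy.
    gy, gx = np.gradient(gaussian_filter(img, 2))
    convergence = gaussian_filter(np.hypot(gx, gy), 8)

    # 3) Linear-structure suppression via Difference-of-Gaussians.
    dog = gaussian_filter(img, dog_sigmas[0]) - gaussian_filter(img, dog_sigmas[1])
    line_penalty = gaussian_filter(np.abs(dog), 4)

    enhanced = blob * convergence - line_penalty
    return (enhanced - enhanced.min()) / (np.ptp(enhanced) + 1e-12)
```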

21 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and is compared against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
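
The structural similarity index mentioned above has the well-known closed form SSIM(x, y) = ((2 mu_x mu_y + C1)(2 sigma_xy + C2)) / ((mu_x^2 + mu_y^2 + C1)(sigma_x^2 + sigma_y^2 + C2)). A minimal global version is sketched below; the referenced MATLAB implementation instead averages the index over local Gaussian-weighted windows, so the two will not produce identical values.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Minimal global SSIM between two grayscale images of equal size.

    Computes SSIM(x, y) = ((2*mu_x*mu_y + C1) * (2*sigma_xy + C2)) /
    ((mu_x**2 + mu_y**2 + C1) * (sigma_x**2 + sigma_y**2 + C2)) once over the
    whole image, with the standard constants C1 = (k1*L)^2, C2 = (k2*L)^2.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2

    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()

    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```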

40,609 citations

Book
01 Jan 1998
TL;DR: An introduction to wavelet signal processing, covering Fourier analysis, time-frequency analysis, frames, wavelet bases, wavelet packet and local cosine bases, approximation, estimation, and transform coding.
Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations

Proceedings ArticleDOI
21 Jul 2017
TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of Twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
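
The conditional adversarial setup described here trains the discriminator on (input, output) pairs while the generator both fools the discriminator and, in the pix2pix formulation, stays close to the ground truth under an L1 penalty. The PyTorch-style sketch below assumes generator and discriminator callables and an illustrative L1 weight; it is a schematic of the objective, not the released pix2pix code.

```python
import torch
import torch.nn.functional as F

def pix2pix_losses(G, D, x, y, lam=100.0):
    """Sketch of the conditional-GAN objective described above.

    D sees (input, output) pairs and tries to separate real pairs (x, y)
    from generated pairs (x, G(x)); G tries to fool D and is additionally
    pulled toward the ground truth by an L1 term weighted by `lam` (weight
    assumed here for illustration). G and D are assumed to be callables
    returning an image and a per-sample logit, respectively.
    """
    fake = G(x)

    # Discriminator: real pairs -> 1, fake pairs -> 0 (fake detached).
    d_real = D(torch.cat([x, y], dim=1))
    d_fake = D(torch.cat([x, fake.detach()], dim=1))
    d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))

    # Generator: fool the discriminator and stay close to y in L1.
    g_logits = D(torch.cat([x, fake], dim=1))
    g_adv = F.binary_cross_entropy_with_logits(g_logits, torch.ones_like(g_logits))
    g_loss = g_adv + lam * F.l1_loss(fake, y)
    return d_loss, g_loss
```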

11,958 citations

Posted Content
TL;DR: Conditional adversarial networks, as discussed by the authors, are a general-purpose solution to image-to-image translation problems and can be used to synthesize photos from label maps, reconstruct objects from edge maps, and colorize images, among other tasks.
Abstract: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.

11,127 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, which are characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations