
Showing papers by "Alan C. Bovik published in 1999"


Proceedings ArticleDOI
15 Mar 1999
TL;DR: It is shown that the AM-FM image representation can identify normal repetitive structures and sarcomeres with a good degree of accuracy, and can detect abnormalities in the sarcomere ultrastructural pattern that alter the normal regular pattern, as seen in muscle pathology.
Abstract: We segment the structural units of electron microscope muscle images using a novel AM-FM image representation. This novel AM-FM approach is shown to be effective in describing sarcomeres and mitochondrial regions of the electron microscope muscle images.
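
As a rough illustration of the AM-FM idea (not the paper's multichannel Gabor-filterbank method), the sketch below demodulates a single synthetic "sarcomere-like" scan line into an amplitude envelope and an instantaneous frequency using the analytic signal; regions where the dominant frequency departs from the regular stripe spacing would be candidates for flagging as abnormal. All signal parameters are illustrative and assume NumPy/SciPy are available.

```python
# Minimal 1-D sketch of AM-FM demodulation via the analytic signal.
# The paper works with a multichannel decomposition of 2-D images;
# this single-channel, single-row version only illustrates the AM/FM split.
import numpy as np
from scipy.signal import hilbert

n = np.arange(512)
# Synthetic quasi-periodic "stripe" profile with slow amplitude and frequency modulation.
row = (1.0 + 0.5 * np.cos(2 * np.pi * n / 256)) * \
      np.cos(2 * np.pi * 0.08 * n + 0.3 * np.sin(2 * np.pi * n / 128))

analytic = hilbert(row)                   # analytic signal
am = np.abs(analytic)                     # amplitude envelope (AM component)
phase = np.unwrap(np.angle(analytic))
fm = np.diff(phase) / (2 * np.pi)         # instantaneous frequency (FM), cycles per pixel

print(am.mean(), fm.mean())               # roughly 1.0 and 0.08 for this synthetic row
```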

35 citations


Journal ArticleDOI
TL;DR: The overall approach provides a new viewpoint on the restoration problem through the use of new image models that capture salient image features that are not well represented through traditional approaches.
Abstract: We describe two broad classes of useful and physically meaningful image models that can be used to construct novel smoothing constraints for use in the regularized image restoration problem. The two classes, termed piecewise image models (PIMs) and local image models (LIMs), respectively, capture unique image properties that can be adapted to the image and that reflect structurally significant surface characteristics. Members of the PIM and LIM classes are easily formed into regularization operators that replace differential-type constraints. We also develop an adaptive strategy for selecting the best PIM or LIM for a given problem (from among the defined class), and we explain the construction of the corresponding regularization operators. Considerable attention is also given to determining the regularization parameter via a cross-validation technique, and to the selection of an optimization strategy for solving the problem. Several results are provided that illustrate the processes of model selection, parameter selection, and image restoration. The overall approach provides a new viewpoint on the restoration problem through the use of new image models that capture salient image features that are not well represented through traditional approaches.
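
To make the regularized-restoration setup concrete, here is a minimal sketch of constrained least-squares restoration in the Fourier domain, with a discrete Laplacian standing in for a PIM/LIM-derived regularization operator (the paper's operators are adaptive and model-specific, and the parameter lam would be chosen by cross-validation as described). The blur kernel, test image, and lam value are illustrative assumptions, not taken from the paper.

```python
# Sketch of regularized restoration: min_x ||h * x - y||^2 + lam * ||c * x||^2,
# solved in the Fourier domain. c is a Laplacian stand-in for a PIM/LIM operator.
import numpy as np

def restore(y, h, c, lam):
    """y: degraded image, h: blur PSF, c: regularization operator, lam: parameter."""
    H = np.fft.fft2(h, s=y.shape)
    C = np.fft.fft2(c, s=y.shape)
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam * np.abs(C) ** 2)
    return np.real(np.fft.ifft2(X))

h = np.full((3, 3), 1.0 / 9.0)                                    # illustrative blur PSF
c = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)  # discrete Laplacian
y = np.random.rand(64, 64)                                        # stand-in degraded observation
x_hat = restore(y, h, c, lam=0.01)                                # lam: via cross-validation in practice
```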

31 citations


Proceedings ArticleDOI
24 Oct 1999
TL;DR: A prototype for foveated visual communications, one of a class of future human-interactive multimedia applications, is introduced, and the benefit of foveation is demonstrated using fading statistics measured in the downtown area of Austin, Texas.
Abstract: The great potential of "foveated imaging" lies in its entropy reduction relative to the original image while minimizing the loss of visual information. By combining human foveation with video compression, communication, and human-machine interface techniques, more efficient multimedia services are expected to be provided in the near future. In this paper, we introduce a prototype for foveated visual communications as one future human-interactive multimedia application, and demonstrate the benefit of foveation using fading statistics measured in the downtown area of Austin, Texas. In order to compare the performance with regular video, we use spatial/temporal resolution and source transmission delay as the evaluation criteria.
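
A hedged sketch of the underlying foveation operation follows: resolution (here, the amount of Gaussian blur) is allowed to fall off with eccentricity from a fixation point, which is what reduces entropy while keeping the fixated region sharp. The falloff rule, blur levels, and function names are illustrative assumptions rather than the prototype's actual model.

```python
# Illustrative foveation: blur increases with distance from the fixation point.
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(img, fov_xy, half_res_ecc=32.0, max_sigma=8.0):
    """img: 2-D grayscale array; fov_xy: (x, y) fixation point in pixels."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ecc = np.hypot(xx - fov_xy[0], yy - fov_xy[1])          # eccentricity map
    sigma_map = max_sigma * ecc / (ecc + half_res_ecc)      # target blur grows with eccentricity
    sigmas = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
    levels = [gaussian_filter(img.astype(float), s) for s in sigmas]
    idx = np.abs(sigma_map[..., None] - sigmas).argmin(axis=-1)
    out = np.empty_like(levels[0])
    for k in range(len(sigmas)):                            # pick the nearest blur level per pixel
        out[idx == k] = levels[k][idx == k]
    return out

foveated = foveate(np.random.rand(256, 256), fov_xy=(128, 128))
```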

15 citations


Journal ArticleDOI
TL;DR: This work investigates in detail one application of transform-domain energy compaction of broadband signals, namely lossy compression, and demonstrates some interesting broadband image compression results.
Abstract: Compaction by optimal permutation (COPERM) is a tool for transform domain energy compaction of broadband signals, whose foundation is a simple but powerful idea: any signal can be transformed to resemble a more desirable (e.g., from a transform-domain compaction viewpoint) signal from a class of "target" signals (e.g., DCT basis functions) by means of a suitable permutation of its samples. One application of transform-domain energy compaction is in lossy compression. We pursue one possible thread in detail and demonstrate some interesting broadband image compression results.
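
The permutation idea is simple enough to demonstrate in a few lines. In this hedged 1-D sketch (names, target index, and signals are illustrative, and the details of the actual COPERM codec are not reproduced), a white-noise signal is permuted so that its samples follow the rank ordering of one DCT basis function; the permuted signal is then far more compact under the DCT, at the cost of the permutation itself becoming side information.

```python
# 1-D illustration of permuting a broadband signal to resemble a DCT target.
import numpy as np
from scipy.fft import dct

N = 256
rng = np.random.default_rng(0)
x = rng.standard_normal(N)                              # broadband, hard-to-compact signal
k = 3                                                   # target DCT basis index (illustrative)
target = np.cos(np.pi * (np.arange(N) + 0.5) * k / N)   # DCT-II basis function

order_t = np.argsort(target)                            # positions of target samples, ascending
order_x = np.argsort(x)                                 # positions of x samples, ascending
y = np.empty(N)
y[order_t] = x[order_x]     # i-th smallest x sample goes where the i-th smallest target sample sits

def top10_energy(v):
    c = dct(v, norm='ortho')
    return np.sort(c ** 2)[::-1][:10].sum() / (c ** 2).sum()

print(top10_energy(x), top10_energy(y))   # the permuted signal is far more compact

# The permutation is invertible, so no information is lost (it must be sent as side info).
x_rec = np.empty(N)
x_rec[order_x] = y[order_t]
assert np.allclose(x_rec, x)
```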

14 citations


Proceedings ArticleDOI
15 Mar 1999
TL;DR: This paper develops several rate control algorithms and measures the performance of foveated video, utilizing H.263 video and comparing the performance with regular video based on the PSNRC (peak signal-to-noise ratio in curvilinear coordinates).
Abstract: Recently, foveated video has been introduced as an important emerging method for very low bit rate multimedia applications. In this paper, we develop several rate control algorithms and measure the performance of foveated video. We utilize H.263 video, and compare the performance with regular video based on the PSNRC (peak signal-to-noise ratio in curvilinear coordinates). In order to maximize compression, we use the maximum quantization parameter (QP=31) for the regular video and code the foveated video sequence at an equivalent bit rate. In simulation, we improve the PSNRC by 3.64 (1.62) dB at 30 (14) kbits/s for P pictures in the CIF "News" ("Akiyo") standard video sequences.
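
As a rough illustration of how foveated quality might be scored, the sketch below computes a foveation-weighted PSNR in which squared errors are down-weighted with eccentricity from the fixation point. This is only one plausible reading: the PSNRC used in the paper is defined in curvilinear (foveated) coordinates, and its exact form, as well as the weighting constants here, are assumptions.

```python
# Hedged sketch of a foveation-weighted PSNR (not necessarily the paper's PSNRC).
import numpy as np

def foveation_weighted_psnr(ref, test, fov_xy, half_res_ecc=32.0, peak=255.0):
    """ref, test: 2-D grayscale images; fov_xy: (x, y) fixation point in pixels."""
    h, w = ref.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ecc = np.hypot(xx - fov_xy[0], yy - fov_xy[1])
    wgt = 1.0 / (1.0 + ecc / half_res_ecc)          # illustrative foveal weighting
    err2 = (ref.astype(float) - test.astype(float)) ** 2
    wmse = np.sum(wgt * err2) / np.sum(wgt)
    return 10.0 * np.log10(peak ** 2 / wmse)
```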

7 citations


Journal ArticleDOI
TL;DR: A binocular stereo system for images coded by Visual Pattern Image Coding (VPIC) is presented and evaluated, and an algorithm for spatial matching of VPIC primitives is proposed.

5 citations


Proceedings ArticleDOI
24 Oct 1999
TL;DR: This paper presents a motion estimation and compensation algorithm for foveated video and measures its performance using a new measure of visual fidelity termed foveal mean absolute distortion; computational redundancy is reduced by subsampling the search area according to the local bandwidth, in the sense of the Nyquist sampling criterion.
Abstract: By exploiting the nonuniform resolution of the human visual system, foveated video can provide high visual quality relative to non-foveated video by allocating more bits to the central foveation area. In this paper, we present a motion estimation and compensation algorithm for foveated video and measure its performance using a new measure of visual fidelity termed foveal mean absolute distortion. The reduction in computational redundancy is achieved by subsampling the search area according to the local bandwidth, in the sense of the Nyquist sampling criterion. In addition, we reduce motion-compensated errors by increasing temporal correlation when single or multiple foveation points are added or removed.
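
A hedged sketch of the bandwidth-dependent search subsampling follows: far from the foveation point the local bandwidth is lower, so (by the Nyquist argument) the block-matching search grid can be sampled more coarsely. The step rule, block size, and distortion measure below are illustrative assumptions rather than the paper's exact algorithm, and the distortion shown is plain mean absolute difference rather than the foveal version.

```python
# Block matching with an eccentricity-dependent search step (illustrative).
import numpy as np

def search_step(center_xy, fov_xy, half_res_ecc=32.0):
    ecc = np.hypot(center_xy[0] - fov_xy[0], center_xy[1] - fov_xy[1])
    return max(1, int(1 + ecc // half_res_ecc))      # coarser step at larger eccentricity

def block_match(cur, ref, top_left, bsize=16, srange=8, fov_xy=(0, 0)):
    """Returns the (dy, dx) motion vector minimizing mean absolute difference."""
    y0, x0 = top_left
    blk = cur[y0:y0 + bsize, x0:x0 + bsize].astype(float)   # assumes the block fits inside cur
    step = search_step((x0 + bsize / 2, y0 + bsize / 2), fov_xy)
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-srange, srange + 1, step):
        for dx in range(-srange, srange + 1, step):
            yy, xx = y0 + dy, x0 + dx
            if yy < 0 or xx < 0 or yy + bsize > ref.shape[0] or xx + bsize > ref.shape[1]:
                continue
            cost = np.abs(blk - ref[yy:yy + bsize, xx:xx + bsize]).mean()
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```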

4 citations


Proceedings ArticleDOI
24 Oct 1999
TL;DR: A nonlinear algorithm is proposed that uses the phase shift between two successive scans of interference fringe data to give a high-resolution estimate of the Doppler shift, and that is well suited for real-time implementation in software.
Abstract: Optical Doppler Tomography (ODT) is a noninvasive 3-D optical interferometric imaging technique that measures static and dynamic structures in a sample. To obtain the dynamic structure, e.g. blood flowing in tissue, a velocity estimation algorithm detects the Doppler shift in the received interference fringe data with respect to the carrier frequency. Previous velocity estimation algorithms use conventional Fourier magnitude techniques that do not provide sufficient frequency resolution in fast ODT systems because of the high data acquisition rates and hence short time series. In this paper, we propose a nonlinear algorithm that uses the phase shift between two successive scans of interference fringe data to give a high-resolution estimate of the Doppler shift. The algorithm detects Doppler shifts of 0.1 to 3 kHz with respect to a 1 MHz carrier. In processing 5 frames/s with 100×100 pixels/frame and 32 samples/pixel, i.e. 1.6 million samples/s, the algorithm requires 26 million multiply-accumulates/s. The algorithm works well at 4 bits/sample. The low complexity and small input data size are well-suited for real-time implementation in software. We provide a mathematical analysis of the Doppler shift resolution by modeling the interference fringe data as an AM-FM signal.
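
For readers wanting a concrete picture of the phase-shift estimator, here is a hedged synthetic sketch in which the mean phase advance of the carrier-demodulated analytic fringe signal between two successive scans (a lag-one autocorrelation, Kasai-style estimate) recovers the Doppler shift. The scan timing, sample counts, and the assumption that the carrier phase repeats identically from scan to scan are illustrative choices, not the paper's exact algorithm or system parameters.

```python
# Synthetic demonstration of a phase-shift (lag-one autocorrelation) Doppler estimator.
import numpy as np
from scipy.signal import hilbert

fs = 4.0e6        # fringe sampling rate (illustrative)
f_c = 1.0e6       # carrier frequency
f_d = 2.0e3       # true Doppler shift to be recovered
T = 1.0e-4        # time between the two scans; chosen so |f_d| * T < 0.5 (no phase wrapping)

t = np.arange(256) / fs
# Carrier phase assumed to repeat each scan, so only the Doppler phase advances by 2*pi*f_d*T.
scan1 = np.cos(2 * np.pi * f_c * t + 2 * np.pi * f_d * t)
scan2 = np.cos(2 * np.pi * f_c * t + 2 * np.pi * f_d * (t + T))

b1 = hilbert(scan1) * np.exp(-2j * np.pi * f_c * t)   # analytic signal, carrier removed
b2 = hilbert(scan2) * np.exp(-2j * np.pi * f_c * t)

dphi = np.angle(np.sum(np.conj(b1) * b2))             # mean phase advance between scans
f_d_hat = dphi / (2 * np.pi * T)
print(f_d_hat)                                        # close to 2000 Hz
```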

4 citations


Proceedings ArticleDOI
24 Oct 1999
TL;DR: The proposed algorithms yield halftones of high fidelity at a low computational cost; optimal formulas are derived for the sharpness control parameter to make the overall frequency response flat.
Abstract: We present raster image processing algorithms for rehalftoning error diffused halftones and producing interpolated error diffused halftones. Rehalftoning converts a halftone created by one method into one created by another method. In interpolated halftoning, interpolation increases the image size before halftoning, e.g. for printing. Both rehalftoning and interpolated halftoning introduce blur and noise in the output image. To compensate for the blur, we use modified error diffusion, which has a variable gain parameter to control the sharpness. We derive optimal formulas for the sharpness control parameter to make the overall frequency response flat. The high-frequency noise is masked by error diffusion. The proposed algorithms yield halftones of high fidelity at a low computational cost.
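
A hedged sketch of the sharpness-controlled ("modified") error diffusion idea appears below: a scaled copy of the original input, with gain L, modulates the quantizer decision while the diffused error is still computed from the unmodified error-corrected value, so the average tone is preserved while sharpness changes with L. The optimal L formulas derived in the paper are not reproduced here; the Floyd-Steinberg weights are standard, and the L value is illustrative.

```python
# Modified Floyd-Steinberg error diffusion with a sharpness control gain L (illustrative).
import numpy as np

def modified_error_diffusion(img, L=0.5):
    """img: grayscale image with values in [0, 1]; returns a binary halftone."""
    x = img.astype(float).copy()            # working copy that accumulates diffused error
    h, w = x.shape
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            u = x[i, j] + L * img[i, j]     # sharpness term modulates the quantizer only
            b = 1.0 if u >= 0.5 else 0.0
            out[i, j] = b
            e = x[i, j] - b                 # error excludes the sharpness term, preserving tone
            if j + 1 < w:                x[i, j + 1]     += e * 7 / 16
            if i + 1 < h and j - 1 >= 0: x[i + 1, j - 1] += e * 3 / 16
            if i + 1 < h:                x[i + 1, j]     += e * 5 / 16
            if i + 1 < h and j + 1 < w:  x[i + 1, j + 1] += e * 1 / 16
    return out

halftone = modified_error_diffusion(np.random.rand(64, 64), L=0.5)
```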

1 citation