
Showing papers on "Image quality published in 1999"


Journal ArticleDOI
TL;DR: The experimental results show that the proposed image authentication technique by embedding digital "watermarks" into images successfully survives image processing operations, image cropping, and the Joint Photographic Experts Group lossy compression.
Abstract: An image authentication technique by embedding digital "watermarks" into images is proposed. Watermarking is a technique for labeling digital pictures by hiding secret information into the images. Sophisticated watermark embedding is a potential method to discourage unauthorized copying or attest the origin of the images. In our approach, we embed the watermarks with visually recognizable patterns into the images by selectively modifying the middle-frequency parts of the image. Several variations of the proposed method are addressed. The experimental results show that the proposed technique successfully survives image processing operations, image cropping, and the Joint Photographic Experts Group (JPEG) lossy compression.
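The mid-frequency embedding idea can be sketched as follows. This is an illustrative toy, not the authors' exact scheme (which embeds visually recognizable patterns): the band choice, additive rule, strength, and non-blind extraction are all assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hypothetical "middle-frequency" band of an 8x8 DCT block (u+v in {5, 6})
MID_BAND = [(u, v) for u in range(8) for v in range(8) if 5 <= u + v <= 6]

def embed_watermark(block8, bits, strength=8.0):
    """Additively embed watermark bits into mid-frequency DCT
    coefficients of one 8x8 image block."""
    c = dctn(np.asarray(block8, float), norm="ortho")
    for (u, v), b in zip(MID_BAND, bits):
        c[u, v] += strength if b else -strength
    return idctn(c, norm="ortho")

def extract_watermark(original8, marked8, n_bits):
    """Non-blind extraction: compare marked vs. original coefficients."""
    diff = (dctn(np.asarray(marked8, float), norm="ortho")
            - dctn(np.asarray(original8, float), norm="ortho"))
    return [1 if diff[u, v] > 0 else 0 for (u, v) in MID_BAND[:n_bits]]
```

Modifying mid frequencies is the usual compromise: low frequencies carry most image energy (changes are visible), while high frequencies are discarded by JPEG-style compression.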

892 citations


Book
01 Dec 1999
TL;DR: This book develops a modulation-threshold and noise model for the spatial contrast sensitivity of the eye, extends it to extra-foveal vision and the temporal domain, and builds on it a contrast discrimination model and an image quality measure.
Abstract: Modulation Threshold and Noise Model for the Spatial Contrast Sensitivity of the Eye; Extension of the Contrast Sensitivity Model to Extra-Foveal Vision; Extension of the Contrast Sensitivity Model to the Temporal Domain; Effect of Non-White Spatial Noise on Contrast Sensitivity; Contrast Discrimination Model; Image Quality Measure; Effect of Various Parameters on Image Quality.

735 citations


PatentDOI
David O. Walsh
TL;DR: Experimental results indicate SNR performance approaching that of the optimal matched filter and the technique enables near‐optimal reconstruction of multicoil MR imagery without a‐priori knowledge of the individual coil field maps or noise correlation structure.
Abstract: A method to model the NMR signal and/or noise functions as stochastic processes. Locally relevant statistics for the signal and/or noise processes are derived directly from the set of individual coil images, in the form of array correlation matrices, by averaging individual coil image cross-products over two or more pixel locations. An optimal complex weight vector is computed on the basis of the estimated signal and noise correlation statistics. The weight vector is applied to coherently combine the individual coil images at a single pixel location, at multiple pixel locations, or over the entire image field of view (FOV).
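A simplified sketch of the stochastic matched-filter idea: estimate a local signal correlation matrix from coil-image cross-products over a pixel neighborhood, then combine the coil images with the dominant eigenvector of the noise-whitened signal correlation. The function name, window handling, and eigenvector choice are assumptions; the actual method includes further details (e.g. noise correlation estimation from the data).

```python
import numpy as np

def walsh_combine(coil_imgs, noise_cov, win=5):
    """Near-SNR-optimal combination of multi-coil MR images (sketch).
    coil_imgs: complex array of shape (n_coils, ny, nx)."""
    n_coils, ny, nx = coil_imgs.shape
    out = np.zeros((ny, nx), complex)
    ninv = np.linalg.inv(noise_cov)
    h = win // 2
    for y in range(ny):
        for x in range(nx):
            ys = slice(max(0, y - h), y + h + 1)
            xs = slice(max(0, x - h), x + h + 1)
            patch = coil_imgs[:, ys, xs].reshape(n_coils, -1)
            rs = patch @ patch.conj().T        # local signal correlation
            # dominant eigenvector of Rn^-1 Rs gives the weight vector
            vals, vecs = np.linalg.eig(ninv @ rs)
            w = vecs[:, np.argmax(vals.real)]
            out[y, x] = w.conj() @ coil_imgs[:, y, x]
    return np.abs(out)
```

No a-priori coil sensitivity maps are needed: the local correlation statistics stand in for them, which is the point emphasized in the TL;DR.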

721 citations


Journal ArticleDOI
Yoshinori Arai, E Tammisalo, K Iwai, Koji Hashimoto, Koji Shinoda
TL;DR: Ortho-CT as mentioned in this paper is a cone-beam-type of CT apparatus consisting of a multifunctional maxillofacial imaging machine (Scanora, Soredex, Helsinki, Finland) in which the film is replaced with an X-ray imaging intensifier (Hamamatsu Photonics, Hamamatsu, Japan).
Abstract: OBJECTIVE To describe a compact computed tomographic apparatus (Ortho-CT) for use in dental practice. METHODS Ortho-CT is a cone-beam-type of CT apparatus consisting of a multifunctional maxillofacial imaging machine (Scanora, Soredex, Helsinki, Finland) in which the film is replaced with an X-ray imaging intensifier (Hamamatsu Photonics, Hamamatsu, Japan). The region of image reconstruction is a cylinder 32 mm in height and 38 mm in diameter and the voxel is a 0.136-mm cube. Scanning is at 85 kV and 10 mA with a 1 mm Cu filter. The scan time is 17 s comparable with that required for rotational panoramic radiography. A single scan collects 512 sets of projection data through 360 degrees and the image is reconstructed by a personal computer. The time required for image reconstruction is about 10 min. RESULTS The resolution limit was about 2.0 lp mm-1 and the skin entrance dose 0.62 mGy. Excellent image quality was obtained with a tissue-equivalent skull phantom: roots, periodontal ligament space, lamina du...

721 citations


Journal ArticleDOI
TL;DR: An implementation of NeTra, a prototype image retrieval system that uses color, texture, shape, and spatial location information in segmented image regions and incorporates a robust automated image segmentation algorithm that allows object- or region-based search.
Abstract: We present here an implementation of NeTra, a prototype image retrieval system that uses color, texture, shape and spatial location information in segmented image regions to search and retrieve similar regions from the database. A distinguishing aspect of this system is its incorporation of a robust automated image segmentation algorithm that allows object- or region-based search. Image segmentation significantly improves the quality of image retrieval when images contain multiple complex objects. Images are segmented into homogeneous regions at the time of ingest into the database, and image attributes that represent each of these regions are computed. In addition to image segmentation, other important components of the system include an efficient color representation, and indexing of color, texture, and shape features for fast search and retrieval. This representation allows the user to compose interesting queries such as "retrieve all images that contain regions that have the color of object A, texture of object B, shape of object C, and lie in the upper part of the image", where the individual objects could be regions belonging to different images. A Java-based web implementation of NeTra is available at http://vivaldi.ece.ucsb.edu/Netra.

624 citations


Proceedings ArticleDOI
TL;DR: An evaluation procedure of image watermarking systems is presented and how to efficiently evaluate the watermark performance in such a way that fair comparisons between different methods are possible is shown.
Abstract: Since the early 90s a number of papers on 'robust' digital watermarking systems have been presented, but none of them uses the same robustness criteria. This is not practical at all for comparison and slows down progress in this area. To address this issue, we present an evaluation procedure for image watermarking systems. First we identify all necessary parameters for proper benchmarking and investigate how to quantitatively describe the image degradation introduced by the watermarking process. For this, we show the weaknesses of usual image quality measures in the context of watermarking and propose a novel measure adapted to the human visual system. Then we show how to efficiently evaluate the watermark performance in such a way that fair comparisons between different methods are possible. The usefulness of three graphs, 'attack vs. visual quality,' 'bit-error vs. visual quality,' and 'bit-error vs. attack,' is investigated. In addition, receiver operating characteristic (ROC) graphs are reviewed and proposed to describe the statistical detection behavior of watermarking methods. Finally we review a number of attacks that any system should survive to be really useful and propose a benchmark and a set of different suitable images.
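Two of the quantities such a benchmark plots are easy to state precisely: PSNR (the "usual" quality measure whose limitations the paper discusses) and the bit error rate of the recovered watermark after an attack.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and a watermarked/attacked image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def bit_error_rate(sent_bits, recovered_bits):
    """Fraction of watermark bits decoded incorrectly."""
    sent = np.asarray(sent_bits)
    recv = np.asarray(recovered_bits)
    return float(np.mean(sent != recv))
```

Sweeping an attack parameter (e.g. JPEG quality) and plotting these two values against each other reproduces the 'bit-error vs. visual quality' style of curve the paper advocates.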

591 citations


Journal ArticleDOI
Hui Hu1
TL;DR: The results show that the slice profile, image artifacts, and noise exhibit performance peaks or valleys at certain helical pitches in the multi-slice CT, whereas in the single- slice CT the image noise remains unchanged and the slices profile and image artifacts steadily deteriorate with helical pitch.
Abstract: The multi-slice CT scanner refers to a special CT system equipped with a multiple-row detector array to simultaneously collect data at different slice locations. The multi-slice CT scanner has the capability of rapidly scanning a large longitudinal (z) volume with high z-axis resolution. It also presents new challenges and new characteristics. In this paper, we study the scan and reconstruction principles of the multi-slice helical CT in general and the 4-slice helical CT in particular. The multi-slice helical computed tomography consists of the following three key components: the preferred helical pitches for efficient z sampling in data collection and better artifact control; the new helical interpolation algorithms to correct for fast simultaneous patient translation; and the z-filtering reconstruction for providing multiple tradeoffs of the slice thickness, image noise and artifacts to suit different application requirements. The concept of the preferred helical pitch is discussed with a newly proposed z sampling analysis. New helical reconstruction algorithms and z-filtering reconstruction are developed for multi-slice CT in general. Furthermore, the theoretical models of slice profile and image noise are established for multi-slice helical CT. For 4-slice helical CT in particular, preferred helical pitches are discussed. Special reconstruction algorithms are developed. Slice profiles, image noise, and artifacts of 4-slice helical CT are studied and compared with single slice helical CT. The results show that the slice profile, image artifacts, and noise exhibit performance peaks or valleys at certain helical pitches in the multi-slice CT, whereas in the single-slice CT the image noise remains unchanged and the slice profile and image artifacts steadily deteriorate with helical pitch. The study indicates that the 4-slice helical CT can provide equivalent image quality at 2 to 3 times the volume coverage speed of the single slice helical CT.

523 citations


Journal ArticleDOI
TL;DR: This imaging mode could be used in different organs with a heightening of low-contrast lesions through artefact reduction, as well as by the induced greater intrinsic contrast sensitivity of the harmonic imaging mode.
Abstract: The recent introduction of tissue harmonic imaging could resolve the problems related to ultrasound in technically difficult patients by providing a marked improvement in image quality. Tissue harmonics are generated during the transmit phase of the pulse-echo cycle, that is, while the transmitted pulse propagates through tissue. Tissue harmonic images are formed by utilizing the harmonic signals that are generated by tissue and by filtering out the fundamental echo signals that are generated by the transmitted acoustic energy. To achieve this, two approaches can be used: one applies filters to separate the fundamental and harmonic bands; the other transmits two pulses with a 180-degree phase difference. The introduction of harmonics allows increased penetration without a loss of detail, by obtaining a clearer image at depth with significantly less compromise to the image quality caused by the use of lower frequencies. This imaging mode could be used in different organs with a heightening of low-contrast lesions through artefact reduction, as well as by the induced greater intrinsic contrast sensitivity of the harmonic imaging mode.
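The second approach (pulse/phase inversion) can be illustrated with a toy signal model. The specific frequencies and harmonic amplitude below are arbitrary assumptions, chosen only to show the cancellation.

```python
import numpy as np

def pulse_inversion(echo_pos, echo_neg):
    """Sum the echoes from two transmits 180 degrees out of phase:
    the linear (fundamental) component cancels, while even harmonics
    generated by nonlinear propagation in tissue add coherently."""
    return echo_pos + echo_neg
```

The reason the harmonic survives is that the nonlinear tissue response behaves roughly like a squared term: inverting the transmitted pulse flips the linear echo but leaves (-p)^2 = p^2 unchanged.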

428 citations


Journal ArticleDOI
TL;DR: The axial and HQ-helical modes of the multi-slice system provided excellent image quality and a substantial reduction in exam time and tube loading, although at varying degrees of increased dose relative to the single-slice scanner.
Abstract: Our purpose in this study was to characterize the performance of a recently introduced multi-slice CT scanner (LightSpeed QX/i, Version 1.0, General Electric Medical Systems) in comparison to a single-slice scanner from the same manufacturer (HiSpeed CT/i, Version 4.0). To facilitate this comparison, a refined definition of pitch is introduced which accommodates multi-slice CT systems, yet maintains the existing relationships between pitch, patient dose, and image quality. The following performance parameters were assessed: radiation and slice sensitivity profiles, low-contrast and limiting spatial resolution, image uniformity and noise, CT number and geometric accuracy, and dose. The multi-slice system was tested in axial (1, 2, or 4 images per gantry rotation) and HQ (Pitch = 0.75) and HS (Pitch = 1.5) helical modes. Axial and helical acquisition speed and limiting spatial resolution (0.8-s exposure) were improved on the multi-slice system. Slice sensitivity profiles, image noise, CT number accuracy and uniformity, and low-contrast resolution were similar. In some HS-helical modes, helical artifacts and geometric distortion were more pronounced with a different appearance. Radiation slice profiles and doses were larger on the multi-slice system at all scan widths. For a typical abdomen and pelvis exam, the central and surface body doses for 5-mm helical scans were higher on the multi-slice system by approximately 50%. The increase in surface CTDI values (with respect to the single-slice system) was greatest for the 4 x 1.25-mm detector configuration (190% for head, 240% for body) and least for the 4 x 5-mm configuration (53% for head, 76% for body). Preliminary testing of version 1.1 software demonstrated reduced doses on the multi-slice scanner, where the increase in body surface CTDI values (with respect to the single-slice system) was 105% for the 4 x 1.25-mm detector configuration and 10% for the 4 x 5-mm configuration. 
In summary, the axial and HQ-helical modes of the multi-slice system provided excellent image quality and a substantial reduction in exam time and tube loading, although at varying degrees of increased dose relative to the single-slice scanner.
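The refined pitch definition mentioned above is commonly written as table travel per gantry rotation divided by the total collimated beam width (number of slices times slice width); the exact formulation in the paper may differ in presentation, and the table-feed values below are assumed for illustration.

```python
def multislice_pitch(table_feed_mm, n_slices, slice_width_mm):
    """Generalized pitch for multi-slice CT: table travel per gantry
    rotation divided by the total collimated beam width (N x T).
    Reduces to the familiar single-slice definition when N = 1."""
    return table_feed_mm / (n_slices * slice_width_mm)
```

With 4 x 1.25-mm collimation, table feeds of 3.75 and 7.5 mm per rotation give the HQ (0.75) and HS (1.5) pitches quoted in the abstract, and this definition preserves the usual link between pitch and dose.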

377 citations


Journal ArticleDOI
TL;DR: In this article, a number of evaluation methods are reviewed, and it is concluded that those based on confusion matrices and the KHAT analysis are the most suited if one is interested in comparing classifiers.
Abstract: The following issues relate to quality assessment of image classification: the classification methods as such, the methods to evaluate the classification results, and the requirements of the application. In this paper, a number of evaluation methods are reviewed, and it is concluded that those based on confusion matrices and the KHAT analysis are the most suited if one is interested in comparing classifiers. The novelty of this paper is that much attention is given to the subjectivity present in every evaluation scheme, and that the concept of accuracy is extended to quality by creating the link between accuracy, objectives, and costs. A protocol is proposed for quality assessment related to the economical reality. An example based on a hypothetical data set shows that the economic cost of misclassification can be high, and that it may be advantageous for the user to reconsider either the objectives, the type of data used, or other aspects of the remote-sensing system that he uses to produce the map.

303 citations


Journal ArticleDOI
TL;DR: A variant of Tikhonov regularization is examined in which radial variation is allowed in the value of the regularization parameter, which minimizes high-frequency noise in the reconstructed image near the source-detector locations and can produce constant image resolution and contrast across the image field.
Abstract: Diffuse tomography with near-infrared light has biomedical application for imaging hemoglobin, water, lipids, cytochromes, or exogenous contrast agents and is being investigated for breast cancer diagnosis. A Newton–Raphson inversion algorithm is used for image reconstruction of tissue optical absorption and transport scattering coefficients from frequency-domain measurements of modulated phase shift and light intensity. A variant of Tikhonov regularization is examined in which radial variation is allowed in the value of the regularization parameter. This method minimizes high-frequency noise in the reconstructed image near the source–detector locations and can produce constant image resolution and contrast across the image field.

Journal ArticleDOI
TL;DR: This paper discusses issues in vision modeling for perceptual video quality assessment (PVQA), to explain how important characteristics of the human visual system may be incorporated in vision models for PVQA, to give a brief overview of the state-of-the-art and current efforts in this field, and to outline directions for future research.

Journal ArticleDOI
TL;DR: A comparison between the two methods gave a good correlation, and a regression equation of SNR_single = 1.1 + 0.94 SNR_dual indicates that the single acquisition method is appropriate for use in a quality assurance programme, since it is quicker and simpler to perform and is a good indicator of the more exact measure.
Abstract: The signal to noise ratio (SNR) is one of the important measures of the performance of a magnetic resonance imaging (MRI) system. The object of this study was to compare a single acquisition method, which estimates the noise from background pixels, with a dual acquisition method which estimates the noise from the subtraction of two sequentially acquired images. The dual acquisition method is more exact, but is slower to perform and requires image manipulation. A comparison between the two methods gave a good correlation, and a regression equation of SNR_single = 1.1 + 0.94 SNR_dual. The single acquisition method is therefore appropriate for use in a quality assurance programme, since it is quicker and simpler to perform and is a good indicator of the more exact measure.
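The two SNR estimators can be sketched as follows. The 0.655 Rayleigh correction for magnitude-image background noise is a standard convention assumed here; the paper's exact constants and ROI handling may differ.

```python
import numpy as np

def snr_single(signal_roi, background_roi):
    """Single-acquisition SNR: noise estimated from background pixels.
    0.655 corrects for the Rayleigh distribution of magnitude-image
    background noise (assumed convention)."""
    return 0.655 * np.mean(signal_roi) / np.std(background_roi)

def snr_dual(signal_roi_1, signal_roi_2):
    """Dual-acquisition SNR: noise estimated from the subtraction of two
    sequentially acquired images; sqrt(2) accounts for the variance
    doubling on subtraction."""
    mean_signal = 0.5 * (np.mean(signal_roi_1) + np.mean(signal_roi_2))
    noise = np.std(np.asarray(signal_roi_1) - np.asarray(signal_roi_2)) / np.sqrt(2)
    return mean_signal / noise
```

On a well-behaved phantom both estimators should agree closely, which is the correlation the study quantifies.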

Journal ArticleDOI
TL;DR: This technique effectively reduces the speckle noise, while preserving the resolvable details, and performs well in comparison to the multiscale thresholding technique without adaptive preprocessing and two otherSpeckle-suppression methods.
Abstract: This paper presents a novel speckle suppression method for medical B-scan ultrasonic images. An original image is first separated into two parts with an adaptive filter. These two parts are then transformed into a multiscale wavelet domain and the wavelet coefficients are processed by a soft thresholding method, which is a variation of Donoho's (1995) soft thresholding method. The processed coefficients for each part are then transformed back into the space domain. Finally, the denoised image is obtained as the sum of the two processed parts. A computer-simulated image and an in vitro B-scan image of a pig heart have been used to test the performance of this new method. This technique effectively reduces the speckle noise, while preserving the resolvable details. It performs well in comparison to the multiscale thresholding technique without adaptive preprocessing and two other speckle-suppression methods.
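The wavelet soft-thresholding step at the heart of the method can be shown in miniature. This sketch uses a single-level 1-D Haar transform rather than the paper's adaptive-filter separation and multiscale 2-D pipeline, so it illustrates only the shrinkage idea.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Donoho-style soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def haar_denoise(signal, t):
    """One-level Haar wavelet denoise of a 1-D signal (even length):
    threshold the detail coefficients, keep the approximation, invert."""
    x = np.asarray(signal, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    d = soft_threshold(d, t)               # suppress speckle-like detail
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)       # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

With the threshold at zero the transform round-trips exactly; raising it trades fine detail for speckle suppression, which is the tradeoff the adaptive preprocessing in the paper is designed to manage.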

Journal ArticleDOI
TL;DR: The proposed deblocking filter improves both subjective and objective image quality for various image features in low bit-rate block-based video coding.
Abstract: This paper presents a method to remove blocking artifacts in low bit-rate block-based video coding. The proposed algorithm has two separate filtering modes, which are selected by pixel behavior around the block boundary. In each mode, proper one-dimensional filtering operations are performed across the block boundary along the horizontal and vertical directions, respectively. In the first mode, corresponding to flat regions, a strong filter is applied inside the block as well as on the block boundary because the flat regions are more sensitive to the human visual system (HVS) and the artifacts propagated from the previous frame due to motion compensation are distributed inside the block. In the second mode, corresponding to other regions, a sophisticated smoothing filter which is based on the frequency information around block boundaries, is used to reduce blocking artifacts adaptively without introducing undesired blur. Even though the proposed deblocking filter is quite simple, it improves both subjective and objective image quality for various image features.
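The two-mode selection can be sketched for one 1-D run of pixels crossing a block boundary. The flatness test, filter taps, and threshold below are illustrative assumptions, not the paper's exact filters.

```python
import numpy as np

def deblock_1d(pixels, boundary, flat_thresh=2.0):
    """Mode-selective 1-D deblocking across a block boundary at index
    `boundary` (needs >= 4 pixels on each side)."""
    p = np.asarray(pixels, float)
    left = p[boundary - 3:boundary]
    right = p[boundary:boundary + 3]
    if np.ptp(left) < flat_thresh and np.ptp(right) < flat_thresh:
        # flat region: strong smoothing inside the blocks as well,
        # since blocking is most visible (and propagated) there
        p[boundary - 3:boundary + 3] = np.convolve(
            p[boundary - 4:boundary + 4], np.ones(3) / 3, "valid")
    else:
        # detailed region: weak filter on the boundary pair only,
        # to avoid blurring real edges
        a, b = p[boundary - 1], p[boundary]
        p[boundary - 1], p[boundary] = (3 * a + b) / 4, (a + 3 * b) / 4
    return p
```

The mode decision from "pixel behavior around the block boundary" is the key element: the strong filter would visibly blur textured regions, and the weak filter would leave flat-region artifacts visible.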

Proceedings ArticleDOI
24 Oct 1999
TL;DR: The paper presents a simple yet robust measure of image quality in terms of global (camera) blur based on histogram computation of non-zero DCT coefficients, which is directly applicable to images and video frames in compressed domain and to all types of MPEG frames.
Abstract: The paper presents a simple yet robust measure of image quality in terms of global (camera) blur. It is based on histogram computation of non-zero DCT coefficients. The technique is directly applicable to images and video frames in compressed (MPEG or JPEG) domain and to all types of MPEG frames (I-, P- or B-frames). The resulting quality measure is proved to be in concordance with subjective testing and is therefore suitable for quick qualitative characterization of images and video frames.
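A sketch of the idea: count how many 8x8 DCT coefficients are (near) zero. Blurrier images concentrate energy in low frequencies, so a larger fraction of AC coefficients vanishes. The threshold, block handling, and normalization here are assumptions; in the compressed domain the quantized coefficients would be read directly from the bitstream instead of recomputed.

```python
import numpy as np
from scipy.fft import dctn

def blur_measure(image, eps=1.0):
    """Fraction of near-zero AC coefficients over all full 8x8 DCT
    blocks; higher values suggest more global blur."""
    img = np.asarray(image, float)
    h = img.shape[0] - img.shape[0] % 8
    w = img.shape[1] - img.shape[1] % 8
    near_zero, total = 0, 0
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            c = dctn(img[y:y + 8, x:x + 8], norm="ortho")
            c[0, 0] = np.inf                   # exclude the DC term
            near_zero += int(np.sum(np.abs(c) < eps))
            total += 63
    return near_zero / total
```

Because it needs only DCT coefficients, the measure applies equally to I-, P-, and B-frames without decoding to the pixel domain.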

Proceedings ArticleDOI
22 Jan 1999
TL;DR: In this article, the authors proposed the Bit-Plane Complexity Segmentation Steganography (BPCS-Steganography) method, which uses an image as the vessel data, and embeds secret information in the bit-planes of the vessel.
Abstract: Steganography is a technique to hide secret information in some other data (we call it a vessel) without leaving any apparent evidence of data alteration. All of the traditional steganographic techniques have limited information-hiding capacity. They can hide only 10% (or less) of the data amounts of the vessel. This is because the principle of those techniques was either to replace a special part of the frequency components of the vessel image, or to replace all the least significant bits of a multivalued image with the secret information. Our new steganography uses an image as the vessel data, and we embed secret information in the bit-planes of the vessel. This technique makes use of the characteristics of the human vision system whereby a human cannot perceive any shape information in a very complicated binary pattern. We can replace all of the noise-like regions in the bit-planes of the vessel image with secret data without deteriorating the image quality. We termed our steganography BPCS-Steganography, which stands for Bit-Plane Complexity Segmentation Steganography. We made an experimental system to investigate this technique in depth. The merits of BPCS-Steganography found by the experiments are as follows. 1. The information hiding capacity of a true color image is around 50%. 2. A sharpening operation on the dummy image increases the embedding capacity quite a bit. 3. Canonical Gray coded bit planes are more suitable for BPCS-Steganography than the standard binary bit planes. 4. Randomization of the secret data by a compression operation makes the embedded data more intangible. 5. Customization of a BPCS-Steganography program for each user is easy. It further protects against eavesdropping on the embedded information.
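The complexity measure that decides whether a bit-plane region is "noise-like" (and therefore replaceable) can be written compactly; 0.3 is a typical threshold choice, assumed here.

```python
import numpy as np

def bitplane_complexity(block):
    """Border complexity of a binary block: the fraction of adjacent
    horizontal and vertical pixel pairs that differ."""
    b = np.asarray(block, int)
    h, w = b.shape
    changes = np.sum(b[:, 1:] != b[:, :-1]) + np.sum(b[1:, :] != b[:-1, :])
    return changes / (h * (w - 1) + w * (h - 1))

def embeddable(block, alpha=0.3):
    """A bit-plane region can be replaced with secret data when its
    complexity exceeds the threshold alpha (assumed value)."""
    return bitplane_complexity(block) > alpha
```

A checkerboard scores 1.0 and a uniform block 0.0; only the high-complexity regions are overwritten, which is why the capacity (around 50% of a true-color image) far exceeds LSB-style methods without visible degradation.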

Journal ArticleDOI
TL;DR: The developments directed toward automated quantitative image analysis and semi‐automated contour detection for cardiovascular MR imaging are reviewed.
Abstract: Magnetic resonance imaging (MRI) offers several acquisition techniques for precise and highly reproducible assessment of global and regional ventricular function, flow, and perfusion at rest and under pharmacological or physical stress conditions. Recent advances in hardware and software have resulted in strong improvement of image quality and in a significant decrease in the required imaging time for each of these acquisitions. Several aspects of heart disease can be studied by combining multiple MRI techniques in a single examination. Such a comprehensive examination could replace a number of other imaging procedures, such as diagnostic X-ray angiography, echocardiography, and scintigraphy, which would be beneficial for the patient and cost effective. Despite the advances in MRI, quantitative image analysis often still relies on manual tracing of contours in the images, which is a time-consuming and tedious procedure that limits the clinical applicability of cardiovascular MRI. Reliable automated or semi-automated image analysis software would be very helpful to overcome the limitations associated with manual image processing. In this paper the developments directed toward automated quantitative image analysis and semi-automated contour detection for cardiovascular MR imaging are reviewed. J. Magn. Reson. Imaging 1999; 10:602–608. © 1999 Wiley-Liss, Inc.

Proceedings ArticleDOI
TL;DR: These objective metrics have a number of interesting properties, including utilization of spatial activity filters which emphasize long edges on the order of 10 arc min while simultaneously performing large amounts of noise suppression and simple perceptibility thresholds and spatial-temporal masking functions.
Abstract: Many organizations have focused on developing digital video quality metrics which produce results that accurately emulate subjective responses. However, to be widely applicable a metric must also work over a wide range of quality, and be useful for in-service quality monitoring. The Institute for Telecommunication Sciences (ITS) has developed spatial-temporal distortion metrics that meet all of these requirements. These objective metrics are described in detail and have a number of interesting properties, including utilization of 1) spatial activity filters which emphasize long edges on the order of 1/5 degree while simultaneously performing large amounts of noise suppression, 2) the angular direction of the spatial gradient, 3) spatial-temporal compression factors of at least 384:1 (spatial compression of at least 64:1 and temporal compression of at least 6:1), and 4) simple perceptibility thresholds and spatial-temporal masking functions. Results are presented that compare the objective metric values with mean opinion scores from a wide range of subjective data bases spanning many different scenes, systems, bit-rates, and applications.

Journal ArticleDOI
TL;DR: This paper presents a correction algorithm which can be parameterized for third and fourth generation CT geometry which requires low computational effort and allows flexible application to different body regions by simple parameter adjustments.
Abstract: X-ray photons which are scattered inside the object slice and reach the detector array increase the detected signal and produce image artifacts as "cupping" effects in large objects and dark bands between regions of high attenuation. The artifact amplitudes increase with scanned volume or slice width. Object scatter can be reduced in third generation computed tomography (CT) geometry by collimating the detector elements. However, a correction can still improve image quality. For fourth generation CT geometry, only poor anti-scatter collimation is possible and a numeric correction is necessary. This paper presents a correction algorithm which can be parameterized for third and fourth generation CT geometry. The method requires low computational effort and allows flexible application to different body regions by simple parameter adjustments. The object scatter intensity which is subtracted from the measured signal is calculated with convolution of the weighted and windowed projection data with a spatially invariant "scatter convolution function". The scatter convolution function is approximated for the desired scanner geometry from pencil beam simulations and measurements using coherent and incoherent differential scatter cross section data. Several examples of phantom and medical objects scanned with third and fourth generation CT systems are discussed. In third generation scanners, scatter artifacts are effectively corrected. For fourth generation geometry with poor anti-scatter collimation, object scatter artifacts are strongly reduced.
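The core of the correction is a convolution-based estimate: the scatter signal is modeled as the weighted projection convolved with a broad, spatially invariant kernel, and subtracted from the measurement. The kernel and weight below are placeholders; in the paper they are derived from pencil-beam simulations and measurements for the specific scanner geometry.

```python
import numpy as np

def scatter_correct(projection, scatter_kernel, weight=0.1):
    """Deterministic scatter correction for one CT projection (sketch):
    estimate scatter by convolving the weighted projection with a broad
    'scatter convolution function', then subtract the estimate."""
    projection = np.asarray(projection, float)
    scatter = weight * np.convolve(projection, scatter_kernel, mode="same")
    return projection - scatter
```

Because the estimate is a single convolution per projection, the computational cost stays low, and adapting the method to another body region amounts to changing the kernel and weight parameters.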

Journal ArticleDOI
TL;DR: This paper presents a means of improving the performance of this technique by estimating the distribution function of the orientation of the line passing through each point, and shows that images can be "stained" for easier visual interpretation by applying to each pixel a false color whose hue is related to the Orientation of the most prominent line segment at that point.
Abstract: Describes an approach to boundary detection in ultrasound speckle based on an image enhancement technique. The enhancement algorithm works by filtering the image with "sticks", short line segments which are varied in orientation to achieve the maximum projected value at each point. In this paper, we present three significant extensions to improve the performance of the basic method. First, we investigate the effect of varying the size and shape of the sticks. We show that these variations affect the performance of the algorithm in very fundamental ways, for example by making it more or less sensitive to thinner or more tightly curving boundaries. Second, we present a means of improving the performance of this technique by estimating the distribution function of the orientation of the line passing through each point. Finally, we show that images can be "stained" for easier visual interpretation by applying to each pixel a false color whose hue is related to the orientation of the most prominent line segment at that point. Examples are given to illustrate the performance of the different settings on a single image.
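The basic sticks operation can be sketched directly: at each pixel, take the maximum of the means along short line segments of different orientations. For simplicity only four orientations are included here; the paper varies both the orientation (2n-2 directions for sticks of length n) and the stick size and shape.

```python
import numpy as np

def sticks_filter(image, length=5):
    """'Sticks' enhancement sketch over 4 orientations (-, |, \\, /):
    keep, per pixel, the maximum mean along each stick."""
    img = np.asarray(image, float)
    h, w = img.shape
    pad = length // 2
    p = np.pad(img, pad, mode="edge")
    out = np.full(img.shape, -np.inf)
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        acc = np.zeros_like(img)
        for k in range(-pad, pad + 1):
            acc += p[pad + k * dy: pad + k * dy + h,
                     pad + k * dx: pad + k * dx + w]
        out = np.maximum(out, acc / length)
    return out
```

A stick aligned with a boundary averages only boundary pixels and keeps its full brightness, while misaligned sticks mix in background and score lower, so line-like structure is enhanced relative to speckle.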

Journal ArticleDOI
TL;DR: The goal of this study was to demonstrate the importance of variations in background anatomy by quantifying its effect on a series of detection tasks and to indicate that the tradeoff between dose and image quality might be optimized by accepting a higher system noise.
Abstract: The knowledge of the relationship that links radiation dose and image quality is a prerequisite to any optimization of medical diagnostic radiology. Image quality depends, on the one hand, on the physical parameters such as contrast, resolution, and noise, and on the other hand, on characteristics of the observer that assesses the image. While the role of contrast and resolution is precisely defined and recognized, the influence of image noise is not yet fully understood. Its measurement is often based on imaging uniform test objects, even though real images contain anatomical backgrounds whose statistical nature is much different from test objects used to assess system noise. The goal of this study was to demonstrate the importance of variations in background anatomy by quantifying its effect on a series of detection tasks. Several types of mammographic backgrounds and signals were examined by psychophysical experiments in a two-alternative forced-choice detection task. According to hypotheses concerning the strategy used by the human observers, their signal to noise ratio was determined. This variable was also computed for a mathematical model based on the statistical decision theory. By comparing theoretical model and experimental results, the way that anatomical structure is perceived has been analyzed. Experiments showed that the observer’s behavior was highly dependent upon both system noise and the anatomical background. The anatomy partly acts as a signal recognizable as such and partly as a pure noise that disturbs the detection process. This dual nature of the anatomy is quantified. It is shown that its effect varies according to its amplitude and the profile of the object being detected. The importance of the noisy part of the anatomy is, in some situations, much greater than the system noise. Hence, reducing the system noise by increasing the dose will not improve task performance.
This observation indicates that the tradeoff between dose and image quality might be optimized by accepting a higher system noise. This could lead to a better resolution, more contrast, or less dose.

Journal ArticleDOI
TL;DR: In this article, a method to determine the optimal step length for an iterative algorithm is proposed for electrical capacitance tomography, and the efficiency of the method has been demonstrated experimentally.
Abstract: Due to the 'soft-field' nature of electrical capacitance tomography, it is necessary to employ an iterative approach for image reconstruction in order to obtain good-quality images. In an iterative algorithm it is important to determine the gain factor, i.e., the step length approaching the converging point, because it may either cause divergence or slow down the iterative process. Usually the step length is fixed. In this communication, a method to determine the optimal step length is derived for an iterative algorithm. The efficiency of the method has been demonstrated experimentally.
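A Landweber-style iteration with an adaptively chosen step length illustrates the idea. Choosing the step to minimize the residual along the gradient direction (classical steepest descent for the linearized problem) is one natural derivation; the communication's exact formula may differ.

```python
import numpy as np

def iterate_optimal_step(S, c, g0, iters=50):
    """Iterative ECT-style reconstruction g <- g + a * S^T e, with the
    gain factor a chosen each iteration to minimize the residual
    ||c - S g|| along the gradient direction, instead of being fixed.
    S: sensitivity matrix, c: measured capacitances, g0: initial image."""
    g = np.asarray(g0, float).copy()
    for _ in range(iters):
        e = c - S @ g              # residual in measurement space
        d = S.T @ e                # gradient (update) direction
        Sd = S @ d
        denom = Sd @ Sd
        if denom == 0:
            break                  # exact solution reached
        a = (d @ d) / denom        # optimal step length for this step
        g += a * d
    return g
```

A fixed gain that is too large diverges and one that is too small converges slowly; the per-iteration optimal step sidesteps that tuning, which is the practical benefit the communication demonstrates.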

Journal ArticleDOI
TL;DR: A new methodology based on least squares estimation is proposed to correct the nonlinear distortion in the endoscopic images and provides high-speed response and forms a key step toward online camera calibration, which is required for accurate quantitative analysis of the images.
Abstract: Images captured with a typical endoscope show spatial distortion, which necessitates distortion correction for subsequent analysis. Here, a new methodology based on least squares estimation is proposed to correct the nonlinear distortion in the endoscopic images. A mathematical model based on polynomial mapping is used to map the images from distorted image space onto the corrected image space. The model parameters include the polynomial coefficients, distortion center, and corrected center. The proposed method utilizes a line search approach of global convergence for the iterative procedure to obtain the optimum expansion coefficients. A new technique to find the distortion center of the image based on curvature criterion is presented. A dual-step approach comprising token matching and integrated neighborhood search is also proposed for accurate extraction of the centers of the dots contained in a rectangular grid, used for the model parameter estimation. The model parameters were verified with different grid patterns. The distortion correction model is applied to several gastrointestinal images and the results are presented. The proposed technique provides high-speed response and forms a key step toward online camera calibration, which is required for accurate quantitative analysis of the images.
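The least-squares fit of polynomial mapping coefficients can be illustrated in a reduced, radially symmetric form: fit a 1-D polynomial taking distorted radii to corrected radii about a known distortion center. The paper's full 2-D mapping, center estimation, and grid-dot extraction are not reproduced; function names and the degree are illustrative.

```python
import numpy as np

def fit_radial_polynomial(r_distorted, r_true, degree=4):
    """Least-squares estimate of coefficients a_k in the mapping
    r_true ~= sum_k a_k * r_distorted**k, via a Vandermonde system.
    Assumes the distortion center is already known."""
    A = np.vander(r_distorted, degree + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(A, r_true, rcond=None)
    return coeffs                      # a_0 ... a_degree (increasing order)

def correct_radius(r, coeffs):
    """Apply the fitted mapping to a distorted radius."""
    return np.polyval(coeffs[::-1], r)  # polyval wants decreasing order
```

In practice the (r_distorted, r_true) pairs would come from the detected grid-dot centers and their ideal positions, and the fit would be embedded in the paper's iterative line-search over all model parameters.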

Journal ArticleDOI
TL;DR: Experimental results show that the proposed coding and rate-shaping systems can provide significant subjective and objective gains over conventional approaches.
Abstract: This paper first proposes a computationally efficient spatial directional interpolation scheme, which makes use of the local geometric information extracted from the surrounding blocks. The proposed error-concealment scheme produces results that are superior to those of other approaches, in terms of both peak signal-to-noise ratio and visual quality. Then a novel approach that incorporates this directional spatial interpolation at the receiver is proposed for block-based low-bit-rate coding. The key observation is that the directional spatial interpolation at the receiver can reconstruct faithfully a large percentage of the blocks that are intentionally not sent. A rate-distortion optimal way to drop the blocks is shown. The new approach can be made compatible with standard JPEG and MPEG decoders. The block-dropping approach also has an important application for dynamic rate shaping in transmitting precompressed videos over channels of dynamic bandwidth. Experimental results show that the proposed coding and rate-shaping systems can provide significant subjective and objective gains over conventional approaches.
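A greatly simplified stand-in for receiver-side concealment of a dropped block: pick the interpolation direction (vertical or horizontal) whose opposing boundary pixels agree best, then fill the block by 1-D linear interpolation along it. The paper's scheme extracts richer geometric information from surrounding blocks; this two-direction version only conveys the flavor.

```python
import numpy as np

def conceal_block(img, top, left, bs=8):
    """Fill a lost bs x bs block at (top, left) by 1-D interpolation along
    the direction whose surrounding boundary pixels mismatch least.
    Assumes one interior ring of valid pixels exists around the block."""
    out = img.copy()
    rows, cols = slice(top, top + bs), slice(left, left + bs)
    above, below = img[top - 1, cols], img[top + bs, cols]
    lcol, rcol = img[rows, left - 1], img[rows, left + bs]
    if np.abs(above - below).mean() <= np.abs(lcol - rcol).mean():
        # vertical interpolation between top and bottom boundaries
        w = (np.arange(1, bs + 1) / (bs + 1))[:, None]
        out[rows, cols] = (1 - w) * above[None, :] + w * below[None, :]
    else:
        # horizontal interpolation between left and right boundaries
        w = (np.arange(1, bs + 1) / (bs + 1))[None, :]
        out[rows, cols] = (1 - w) * lcol[:, None] + w * rcol[:, None]
    return out
```

Blocks that such interpolation can reconstruct faithfully are exactly the ones the encoder can afford to drop, which is what makes the rate-distortion-optimal block-dropping in the paper possible.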

Journal ArticleDOI
TL;DR: A new post‐processing strategy is presented that can reduce artifacts due to in‐plane, rigid‐body motion in times comparable to that required to re‐scan a patient.
Abstract: Patient motion during the acquisition of a magnetic resonance image can cause blurring and ghosting artifacts in the image. This paper presents a new post-processing strategy that can reduce artifacts due to in-plane, rigid-body motion in times comparable to that required to re-scan a patient. The algorithm iteratively determines unknown patient motion such that corrections for this motion provide the best image quality, as measured by an entropy-related focus criterion. The new optimization strategy features a multi-resolution approach in the phase-encode direction, separate successive one-dimensional searches for rotations and translations, and a novel method requiring only one re-gridding calculation for each rotation angle considered. Applicability to general rigid-body in-plane rotational and translational motion and to a range of differently weighted images and k-space trajectories is demonstrated. Motion artifact reduction is observed for data from a phantom, volunteers, and patients.
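The core loop of such an autofocus can be sketched for the simplest case: one corrupted phase-encode line and a search over candidate in-plane translations, each of which is a constant phase on that k-space line. Scoring uses an entropy focus criterion on image magnitude. The paper's multi-resolution search, rotations, and re-gridding are omitted; this is a toy one-line version with illustrative names.

```python
import numpy as np

def focus_entropy(img):
    """Entropy of the normalized image magnitude: sharper images
    concentrate energy in fewer pixels and score lower."""
    p = np.abs(img).ravel()
    p = p / (p.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def autofocus_row(kspace, row, candidate_shifts):
    """Try candidate y-translations for one phase-encode line (each is a
    constant phase exp(-2*pi*i*ky*s/N) on that line in centered k-space)
    and return the shift whose correction minimizes the entropy."""
    n = kspace.shape[0]
    ky = row - n // 2
    best_s, best_e = None, np.inf
    for s in candidate_shifts:
        trial = kspace.copy()
        trial[row, :] *= np.exp(-2j * np.pi * ky * s / n)
        e = focus_entropy(np.fft.ifft2(np.fft.ifftshift(trial)))
        if e < best_e:
            best_s, best_e = s, e
    return best_s
```

Because each candidate only rescales one line, the full algorithm can amortize most of the inverse FFT work across candidates, which is one source of the speed the paper reports.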

Journal ArticleDOI
TL;DR: A new automatic target recognition (ATR) system has been developed that provides significantly improved target recognition performance compared with ATR systems that use conventional synthetic aperture radar (SAR) image-processing techniques.
Abstract: Using advanced technology, a new automatic target recognition (ATR) system has been developed that provides significantly improved target recognition performance compared with ATR systems that use conventional synthetic aperture radar (SAR) image-processing techniques. This significant improvement in target recognition performance is achieved by using a new superresolution image-processing technique that enhances SAR image resolution (and image quality) prior to performing target recognition. A computationally efficient two-level implementation of a template-based classifier is used to perform target recognition. The improvement in target recognition performance achieved using superresolution image processing in this new ATR system is quantified.
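The two-level template classifier idea can be sketched as a cheap coarse pass that prunes the template set before full-resolution matching. The decimation factor, the MSE metric, and the target labels below are illustrative choices, not the system's actual parameters.

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def two_level_classify(chip, templates, keep=2):
    """Level 1: score all templates on 2x-decimated imagery and keep the
    best `keep` candidates. Level 2: rank only the survivors at full
    resolution. Cuts full-resolution comparisons from len(templates)
    to `keep`."""
    coarse = chip[::2, ::2]
    coarse_scores = sorted((mse(coarse, t[::2, ::2]), label)
                           for label, t in templates.items())
    survivors = [label for _, label in coarse_scores[:keep]]
    fine_scores = [(mse(chip, templates[label]), label)
                   for label in survivors]
    return min(fine_scores)[1]
```

Superresolution preprocessing helps precisely because both levels compare pixel patterns: sharper chips and templates separate the class scores further.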

Journal ArticleDOI
TL;DR: The results of this study showed that it was possible to maintain the perceived lightness contrast of the images by using sigmoidal contrast enhancement functions to selectively rescale images from a source device with a full dynamic range into a destination devices with a limited dynamic range.
Abstract: In color gamut mapping of pictorial images, the lightness rendition of the mapped images plays a major role in the quality of the final image. For color gamut mapping tasks, where the goal is to produce a match to the original scene, it is important to maintain the perceived lightness contrast of the original image. Typical lightness remapping functions such as linear compression, soft compression, and hard clipping reduce the lightness contrast of the input image. Sigmoidal remapping functions were utilized to overcome the natural loss in perceived lightness contrast that results when an image from a full dynamic range device is scaled into the limited dynamic range of a destination device. These functions were tuned to the particular lightness characteristics of the images used and the selected dynamic ranges. The sigmoidal remapping functions were selected based on an empirical contrast-enhancement model developed from the results of a psychophysical adjustment experiment. The results of this study showed that it was possible to maintain the perceived lightness contrast of the images by using sigmoidal contrast-enhancement functions to selectively rescale images from a source device with a full dynamic range into a destination device with a limited dynamic range.
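A sigmoidal lightness remap of this kind can be sketched with a normalized logistic curve: mid-tone slope (perceived contrast) is boosted while shadows and highlights are compressed into the destination range. The midpoint and slope would be tuned per image and per device pair, as the abstract describes; the values below are purely illustrative.

```python
import math

def sigmoidal_remap(L, x0=50.0, k=0.06, out_min=10.0, out_max=90.0):
    """Map source lightness L* in [0, 100] into [out_min, out_max] via a
    logistic curve centered at x0 with slope parameter k, normalized so
    the endpoints of the input range land exactly on the output limits."""
    s = lambda x: 1.0 / (1.0 + math.exp(-k * (x - x0)))
    lo, hi = s(0.0), s(100.0)
    t = (s(L) - lo) / (hi - lo)          # normalized to [0, 1]
    return out_min + t * (out_max - out_min)
```

A linear compression into the same [10, 90] range would have constant slope 0.8; the sigmoid trades endpoint slope for a mid-tone slope above 1, which is what preserves the perceived contrast.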

Journal ArticleDOI
TL;DR: Quantitative measures of recovered inclusion shape and position reveal a systematic improvement in image reconstruction quality when the nonactive antenna-compensation model is employed, and improvements in electrical property value recovery of localized heterogeneities are also observed.
Abstract: For pt. I see ibid., vol. 18, no. 6, p. 496 (1999). Model-based imaging techniques utilizing microwave signal illumination rely heavily on the ability to accurately represent the wave propagation with a suitable numerical model. To date, the highest quality images from the authors' prototype system have been achieved utilizing a single transmitter/single receiver measurement system where both antennas are manually repositioned to facilitate multiple illuminations of the imaging region, thus requiring long data acquisition times. In an effort to develop a system that can acquire data in a real time manner, a 32-channel network has been fabricated with all ports capable of being electronically selected for either transmit or receive mode. The presence of a complete array of antenna elements at data collection time perturbs the field distributions being measured, which can subsequently degrade the image reconstruction due to increased data-model mismatch. Incorporating the nonactive antenna-compensation model from Part I of this paper into the authors' hybrid element near field image reconstruction algorithm is shown to restore image quality when fixed antenna-array data acquisition is used. Improvements are most dramatic for inclusions located in near proximity to the antenna array itself, although cases of improvement in the recovery of centered heterogeneities are also illustrated. Increases in the frequency of illumination are found to warrant an increased need for nonactive antenna compensation. Quantitative measures of recovered inclusion shape and position reveal a systematic improvement in image reconstruction quality when the nonactive antenna-compensation model is employed. Improvements in electrical property value recovery of localized heterogeneities are also observed. Image reconstructions in freshly excised breast tissue illustrate the applicability of the approach when used with the authors' two-dimensional microwave imaging system.

Proceedings ArticleDOI
24 Oct 1999
TL;DR: A piecewise mapping function according to human visual sensitivity of contrast is used so that adaptivity can be achieved without extra bits for overhead in the embedding of multimedia data into a host image.
Abstract: We propose in this paper a novel method for embedding multimedia data (audio, image, video, or text; compressed or uncompressed) into a host image. The classical LSB concept is adopted, but with the number of LSBs adapted to pixels of different graylevels. A piecewise mapping function based on human visual sensitivity to contrast is used, so that adaptivity is achieved without extra overhead bits. The leading information needed for data decoding is small, no more than 3 bytes. Experiments show that a large volume of bit streams (nearly 30%-45% of the host image) can be embedded without severe degradation of the image quality (33-40 dB, depending on the volume of embedded bits).
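The adaptive-LSB idea can be sketched as follows: a piecewise function maps a pixel's graylevel to the number of embeddable LSBs, and it is evaluated on the upper bits only (which embedding never alters), so the decoder recomputes each pixel's capacity with no side information. The breakpoints below are illustrative, not the paper's actual mapping.

```python
def capacity(pixel):
    """Piecewise graylevel -> number of embeddable LSBs: more bits where
    contrast sensitivity is low (very dark / very bright regions).
    Computed from the upper 4 bits only, which embedding preserves."""
    g = pixel & 0xF0
    if g < 32 or g >= 224:
        return 4
    if g < 64 or g >= 192:
        return 3
    return 2

def embed(pixels, bitstring):
    """Write bits into each pixel's lowest capacity(pixel) bits."""
    out, pos = [], 0
    for p in pixels:
        n = capacity(p)
        chunk = bitstring[pos:pos + n].ljust(n, '0')
        out.append((p & ~((1 << n) - 1)) | int(chunk, 2))
        pos += n
    return out

def extract(pixels):
    """Recover the bit stream; capacity is recomputed per stego pixel."""
    return ''.join(format(p & ((1 << capacity(p)) - 1),
                          '0{}b'.format(capacity(p)))
                   for p in pixels)
```

Since only the lowest 4 bits can ever change, the upper-4-bit graylevel class of every pixel survives embedding, which is what makes blind extraction consistent.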