
Showing papers on "Image quality published in 2000"


Book
01 Jan 2000
TL;DR: The Handbook of Image and Video Processing contains a comprehensive and highly accessible presentation of all essential mathematics, techniques, and algorithms for every type of image and video processing used by scientists and engineers.
Abstract:
1.0 INTRODUCTION
  1.1 Introduction to Image and Video Processing (Bovik)
2.0 BASIC IMAGE PROCESSING TECHNIQUES
  2.1 Basic Gray-Level Image Processing (Bovik)
  2.2 Basic Binary Image Processing (Desai/Bovik)
  2.3 Basic Image Fourier Analysis and Convolution (Bovik)
3.0 IMAGE AND VIDEO PROCESSING
  Image and Video Enhancement and Restoration
  3.1 Basic Linear Filtering for Image Enhancement (Acton/Bovik)
  3.2 Nonlinear Filtering for Image Enhancement (Arce)
  3.3 Morphological Filtering for Image Enhancement and Detection (Maragos)
  3.4 Wavelet Denoising for Image Enhancement (Wei)
  3.5 Basic Methods for Image Restoration and Identification (Biemond)
  3.6 Regularization for Image Restoration and Reconstruction (Karl)
  3.7 Multi-Channel Image Recovery (Galatsanos)
  3.8 Multi-Frame Image Restoration (Schulz)
  3.9 Iterative Image Restoration (Katsaggelos)
  3.10 Motion Detection and Estimation (Konrad)
  3.11 Video Enhancement and Restoration (Lagendijk)
  Reconstruction from Multiple Images
  3.12 3-D Shape Reconstruction from Multiple Views (Aggarwal)
  3.13 Image Stabilization and Mosaicking (Chellappa)
4.0 IMAGE AND VIDEO ANALYSIS
  Image Representations and Image Models
  4.1 Computational Models of Early Human Vision (Cormack)
  4.2 Multiscale Image Decomposition and Wavelets (Moulin)
  4.3 Random Field Models (Zhang)
  4.4 Modulation Models (Havlicek)
  4.5 Image Noise Models (Boncelet)
  4.6 Color and Multispectral Representations (Trussell)
  Image and Video Classification and Segmentation
  4.7 Statistical Methods (Lakshmanan)
  4.8 Multi-Band Techniques for Texture Classification and Segmentation (Manjunath)
  4.9 Video Segmentation (Tekalp)
  4.10 Adaptive and Neural Methods for Image Segmentation (Ghosh)
  Edge and Boundary Detection in Images
  4.11 Gradient and Laplacian-Type Edge Detectors (Rodriguez)
  4.12 Diffusion-Based Edge Detectors (Acton)
  Algorithms for Image Processing
  4.13 Software for Image and Video Processing (Evans)
5.0 IMAGE COMPRESSION
  5.1 Lossless Coding (Karam)
  5.2 Block Truncation Coding (Delp)
  5.3 Vector Quantization (Smith)
  5.4 Wavelet Image Compression (Ramchandran)
  5.5 The JPEG Lossy Standard (Ansari)
  5.6 The JPEG Lossless Standard (Memon)
  5.7 Multispectral Image Coding (Bouman)
6.0 VIDEO COMPRESSION
  6.1 Basic Concepts and Techniques of Video Coding (Barnett/Bovik)
  6.2 Spatiotemporal Subband/Wavelet Video Compression (Woods)
  6.3 Object-Based Video Coding (Kunt)
  6.4 MPEG-I and MPEG-II Video Standards (Ming-Ting Sun)
  6.5 Emerging MPEG Standards: MPEG-IV and MPEG-VII (Kossentini)
7.0 IMAGE AND VIDEO ACQUISITION
  7.1 Image Scanning, Sampling, and Interpolation (Allebach)
  7.2 Video Sampling and Interpolation (Dubois)
8.0 IMAGE AND VIDEO RENDERING AND ASSESSMENT
  8.1 Image Quantization, Halftoning, and Printing (Wong)
  8.2 Perceptual Criteria for Image Quality Evaluation (Pappas)
9.0 IMAGE AND VIDEO STORAGE, RETRIEVAL AND COMMUNICATION
  9.1 Image and Video Indexing and Retrieval (Tsuhan Chen)
  9.2 A Unified Framework for Video Browsing and Retrieval (Huang)
  9.3 Image and Video Communication Networks (Schonfeld)
  9.4 Image Watermarking (Pitas)
10.0 APPLICATIONS OF IMAGE PROCESSING
  10.1 Synthetic Aperture Radar Imaging (Goodman/Carrera)
  10.2 Computed Tomography (Leahy)
  10.3 Cardiac Imaging (Higgins)
  10.4 Computer-Aided Detection for Screening Mammography (Bowyer)
  10.5 Fingerprint Classification and Matching (Jain)
  10.6 Probabilistic Models for Face Recognition (Pentland/Moghaddam)
  10.7 Confocal Microscopy (Merchant/Bartels)
  10.8 Automatic Target Recognition (Miller)
Index

1,678 citations


Journal ArticleDOI
TL;DR: It is interesting to note that JPEG2000 is being designed to address the requirements of a diversity of applications, e.g. Internet, color facsimile, printing, scanning, digital photography, remote sensing, mobile applications, medical imagery, digital library and E-commerce.
Abstract: With the increasing use of multimedia technologies, image compression requires higher performance as well as new features. To address this need in the specific area of still image encoding, a new standard is currently being developed: JPEG2000. It is intended not only to provide rate-distortion and subjective image quality performance superior to existing standards, but also to provide features and functionalities that current standards either cannot address efficiently or, in many cases, cannot address at all. Lossless and lossy compression, embedded lossy-to-lossless coding, progressive transmission by pixel accuracy and by resolution, robustness to the presence of bit errors, and region-of-interest coding are some representative features. It is interesting to note that JPEG2000 is being designed to address the requirements of a diversity of applications, e.g. Internet, color facsimile, printing, scanning, digital photography, remote sensing, mobile applications, medical imagery, digital libraries, and e-commerce.
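
As a concrete illustration of the embedded, progressive coding the abstract describes, the sketch below writes one JPEG2000 codestream containing several quality layers, so a decoder can stop after any layer for a lower-fidelity preview. This is a hedged illustration of the feature, not part of the standardization work itself; it assumes a Pillow build with OpenJPEG support, and the file names are hypothetical.

```python
# Minimal sketch (assumes Pillow compiled with OpenJPEG; "input.png" and
# "output.jp2" are hypothetical): one JPEG2000 codestream with three
# embedded quality layers at roughly 80:1, 40:1, and 20:1 compression.
from PIL import Image

img = Image.open("input.png").convert("RGB")
img.save("output.jp2", quality_mode="rates", quality_layers=[80, 40, 20])
```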

1,485 citations


Journal ArticleDOI
TL;DR: It is demonstrated how to decouple distortion and additive noise degradation in a practical image restoration system, and the nonlinear NQM is shown to be a better measure of visual quality than peak signal-to-noise ratio (PSNR) and linear quality measures.
Abstract: We model a degraded image as an original image that has been subject to linear frequency distortion and additive noise injection. Since the psychovisual effects of frequency distortion and noise injection are independent, we decouple these two sources of degradation and measure their effect on the human visual system. We develop a distortion measure (DM) of the effect of frequency distortion, and a noise quality measure (NQM) of the effect of additive noise. The NQM, which is based on Peli's (1990) contrast pyramid, takes into account the following: 1) variation in contrast sensitivity with distance, image dimensions, and spatial frequency; 2) variation in the local luminance mean; 3) contrast interaction between spatial frequencies; 4) contrast masking effects. For additive noise, we demonstrate that the nonlinear NQM is a better measure of visual quality than peak signal-to-noise ratio (PSNR) and linear quality measures. We compute the DM in three steps. First, we find the frequency distortion in the degraded image. Second, we compute the deviation of this frequency distortion from an allpass response of unity gain (no distortion). Finally, we weight the deviation by a model of the frequency response of the human visual system and integrate over the visible frequencies. We demonstrate how to decouple distortion and additive noise degradation in a practical image restoration system.
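
The three-step DM computation lends itself to a compact sketch. The following is a simplified reading of the procedure, not the authors' exact implementation: the contrast-sensitivity weighting `csf` and the spectrum handling are placeholders for the paper's full human-visual-system model.

```python
import numpy as np

def distortion_measure(original, degraded, csf):
    """Simplified DM sketch: (1) estimate the frequency distortion of the
    degraded image, (2) measure its deviation from an allpass response of
    unity gain, (3) weight by a contrast-sensitivity function (csf, same
    shape as the spectra) and integrate over the visible frequencies."""
    O = np.fft.fft2(original)
    D = np.fft.fft2(degraded)
    H = np.abs(D) / (np.abs(O) + 1e-12)   # step 1: estimated |frequency response|
    deviation = np.abs(H - 1.0)           # step 2: distance from allpass unity gain
    return float(np.sum(csf * deviation)) # step 3: HVS-weighted integral
```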

820 citations


Book
01 Nov 2000
TL;DR: Topics covered: system concepts, system components, image reconstruction, spiral CT, multi-slice spiral CT, cone-beam CT, dynamic CT, quantitative CT, dual source CT, dual energy CT, flat detector CT, micro CT, image quality, spatial resolution, contrast, pixel noise, homogeneity, routine and special applications, 3D displays, post-processing, and quality assurance.
Abstract: System concepts, system components, image reconstruction, spiral CT, multi-slice spiral CT, cone-beam CT, dynamic CT, quantitative CT, dual source CT, dual energy CT, flat detector CT, micro CT, image quality, spatial resolution, contrast, pixel noise, homogeneity, routine and special applications, 3D displays, post-processing, quality assurance.

641 citations


Journal ArticleDOI
TL;DR: It is shown that the position-dependent bias in a numerical study can lead to apparent strains of the order of 40% of the actual strain level, and methods are presented to reduce this bias to acceptable levels.
Abstract: Recently, digital image correlation as a tool for surface deformation measurements has found widespread use and acceptance in the field of experimental mechanics. The method is known to reconstruct displacements with subpixel accuracy that depends on various factors such as image quality, noise, and the correlation algorithm chosen. However, the systematic errors of the method have not been studied in detail. We address the systematic errors of the iterative spatial domain cross-correlation algorithm caused by gray-value interpolation. We investigate the position-dependent bias in a numerical study and show that it can lead to apparent strains of the order of 40% of the actual strain level. Furthermore, we present methods to reduce this bias to acceptable levels. © 2000 Society of Photo-Optical Instrumentation Engineers. (S0091-3286(00)00911-9)

602 citations


Journal ArticleDOI
TL;DR: It is found that when optimizing for an exponential packet loss model with a mean loss rate of 20% and using a total rate of 0.2 bits per pixel on the Lenna image, good image quality can be obtained even when 40% of transmitted packets are lost.
Abstract: We present the unequal loss protection (ULP) framework, in which unequal amounts of forward error correction are applied to progressive data to provide graceful degradation of image quality as packet losses increase. We develop a simple algorithm that can find a good assignment within the ULP framework. We use the set partitioning in hierarchical trees (SPIHT) coder in this work, but our algorithm can protect any progressive compression scheme. In addition, we promote the use of a probability mass function (PMF) of expected channel conditions so that our system can work with almost any model or estimate of packet losses. We find that when optimizing for an exponential packet loss model with a mean loss rate of 20% and using a total rate of 0.2 bits per pixel on the Lenna image, good image quality can be obtained even when 40% of transmitted packets are lost.
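
The trade-off ULP optimizes can be made concrete with a small model. Under the usual MDS-code assumption (a segment striped across N packets with f parity packets is decodable iff at most f packets are lost), and with non-increasing protection so the survival events nest, the expected decodable source rate is a simple sum. This is a sketch of the kind of objective such an assignment algorithm maximizes, not the authors' algorithm itself:

```python
import math

def p_loss_at_most(k, n, p):
    # P(number of lost packets <= k) for i.i.d. losses at rate p.
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

def expected_source_rate(fec, n, p, seg_rate):
    """Expected decodable source rate of a progressive stream striped over n
    packets. fec[i] = parity packets protecting segment i; non-increasing,
    so a segment is usable only if all earlier (better-protected) ones are."""
    assert all(a >= b for a, b in zip(fec, fec[1:]))
    return sum(seg_rate * p_loss_at_most(f, n, p) for f in fec)
```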

504 citations


Proceedings ArticleDOI
01 Aug 2000
TL;DR: This paper proposes new rendering techniques that significantly improve both performance and image quality of the 2D-texture based approach and demonstrates how multi-stage rasterization hardware can be used to efficiently render shaded isosurfaces and to compute diffuse illumination for semi-transparent volume rendering at interactive frame rates.
Abstract: Interactive direct volume rendering has so far been restricted to high-end graphics workstations and special-purpose hardware, due to the large number of trilinear interpolations necessary to obtain high image quality. Implementations that use the 2D-texture capabilities of standard PC hardware usually render object-aligned slices in order to substitute bilinear for trilinear interpolation. However, the resulting images often contain visual artifacts caused by the lack of spatial interpolation. In this paper we propose new rendering techniques that significantly improve both performance and image quality of the 2D-texture based approach. We show how the multi-texturing capabilities of modern consumer PC graphics boards are exploited to enable interactive high-quality volume visualization on low-cost hardware. Furthermore, we demonstrate how multi-stage rasterization hardware can be used to efficiently render shaded isosurfaces and to compute diffuse illumination for semi-transparent volume rendering at interactive frame rates.
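
The artifact the paper attacks comes from replacing trilinear with bilinear interpolation, and the multi-texturing remedy amounts to blending two bilinearly sampled, adjacent object-aligned slices. A CPU sketch of that identity (array-based slices and sample points are hypothetical; samples assumed away from the borders):

```python
import numpy as np

def bilinear(img, x, y):
    # Standard bilinear lookup; assumes 0 <= x < W-1 and 0 <= y < H-1.
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def trilinear_from_slices(slice0, slice1, frac, x, y):
    """Trilinear interpolation rebuilt from two bilinear fetches blended with
    weight frac in [0, 1] -- the operation multi-texturing hardware performs
    in one pass between adjacent object-aligned slices."""
    return (1 - frac) * bilinear(slice0, x, y) + frac * bilinear(slice1, x, y)
```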

336 citations


Book ChapterDOI
14 Jun 2000
TL;DR: Deformable models have been extensively studied and widely used in medical image segmentation; they offer robustness to both image noise and boundary gaps and allow boundary elements to be integrated into a coherent and consistent mathematical description.
Abstract: In the past four decades, computerized image segmentation has played an increasingly important role in medical imaging. Segmented images are now used routinely in a multitude of different applications, such as the quantification of tissue volumes [1], diagnosis [2], localization of pathology [3], study of anatomical structure [4, 5], treatment planning [6], partial volume correction of functional imaging data [7], and computer-integrated surgery [8, 9]. Image segmentation remains a difficult task, however, due to both the tremendous variability of object shapes and the variation in image quality (see Fig. 3.1). In particular, medical images are often corrupted by noise and sampling artifacts, which can cause considerable difficulties when applying classical segmentation techniques such as edge detection and thresholding. As a result, these techniques either fail completely or require some kind of postprocessing step to remove invalid object boundaries in the segmentation results. To address these difficulties, deformable models have been extensively studied and widely used in medical image segmentation, with promising results. Deformable models are curves or surfaces defined within an image domain that can move under the influence of internal forces, which are defined within the curve or surface itself, and external forces, which are computed from the image data. The internal forces are designed to keep the model smooth during deformation. The external forces are defined to move the model toward an object boundary or other desired features within an image. By constraining extracted boundaries to be smooth and incorporating other prior information about the object shape, deformable models offer robustness to both image noise and boundary gaps and allow integrating boundary elements into a coherent and consistent mathematical description. Such a boundary description can then be readily used by subsequent applications. Moreover, since deformable models are implemented on the continuum, the resulting boundary representation can achieve subpixel accuracy, a highly desirable property for medical imaging applications. Figure 3.2 shows two examples of using deformable models to extract object boundaries from medical images. The result is a parametric curve in Fig. 3.2(a) and a parametric surface in Fig. 3.2(b).
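
The internal/external force balance described above is usually written as an evolution equation and solved by explicit time stepping. Below is a minimal sketch for a closed 2D contour in the classic snake formulation (one member of the deformable-model family the chapter surveys); `external_force` is a hypothetical callback returning per-point image forces, and the coefficients are illustrative defaults.

```python
import numpy as np

def snake_step(pts, external_force, alpha=0.1, beta=0.01, dt=0.1):
    """One explicit Euler step for a closed active contour.
    pts: (N, 2) array of contour points (wrap-around via np.roll).
    Internal forces: alpha * elasticity (2nd difference) resists stretching,
    beta * rigidity (4th difference) resists bending; the external force
    pulls points toward image features such as strong gradients."""
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)
    d4 = (np.roll(pts, -2, axis=0) - 4 * np.roll(pts, -1, axis=0) + 6 * pts
          - 4 * np.roll(pts, 1, axis=0) + np.roll(pts, 2, axis=0))
    return pts + dt * (alpha * d2 - beta * d4 + external_force(pts))
```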

291 citations


Journal ArticleDOI
TL;DR: Diffraction enhanced imaging is a new, synchrotron-based, x-ray radiography method that uses monochromatic, fan-shaped beams, with an analyser crystal positioned between the subject and the detector, and has the potential for use in clinical radiography and in industry.
Abstract: Diffraction enhanced imaging (DEI) is a new, synchrotron-based, x-ray radiography method that uses monochromatic, fan-shaped beams, with an analyser crystal positioned between the subject and the detector. The analyser allows the detection of only those x-rays transmitted by the subject that fall into the acceptance angle (central part of the rocking curve) of the monochromator/analyser system. As shown by Chapman et al, in addition to the x-ray attenuation, the method provides information on the out-of-plane angular deviation of x-rays. New images result in which the image contrast depends on the x-ray index of refraction and on the yield of small-angle scattering, respectively. We implemented DEI in the tomography mode at the National Synchrotron Light Source using 22 keV x-rays, and imaged a cylindrical acrylic phantom that included oil-filled, slanted channels. The resulting 'refraction CT image' shows the pure image of the out-of-plane gradient of the x-ray index of refraction. No image artefacts were present, indicating that the CT projection data were a consistent set. The 'refraction CT image' signal is linear with the gradient of the refractive index, and its value is equal to that expected. The method, at the energy used or higher, has the potential for use in clinical radiography and in industry.

258 citations


Proceedings ArticleDOI
TL;DR: A semi-fragile watermarking technique is proposed that accepts JPEG lossy compression on the watermarked image to a pre-determined quality factor and rejects malicious attacks.
Abstract: In this paper, we propose a semi-fragile watermarking technique that accepts JPEG lossy compression on the watermarked image to a pre-determined quality factor, and rejects malicious attacks. The authenticator can identify the positions of corrupted blocks, and recover them with an approximation of the original ones. In addition to JPEG compression, adjustments of the brightness of the image within reasonable ranges are also acceptable using the proposed authenticator. The security of the proposed method is achieved by using a secret block mapping function which controls the signature generating/embedding processes. Our authenticator is based on two invariant properties of DCT coefficients before and after JPEG compression. They are deterministic, so no probabilistic decision is needed in the system. The first property shows that if we modify a DCT coefficient to an integral multiple of a quantization step which is larger than the steps used in later JPEG compressions, then this coefficient can be exactly reconstructed after later acceptable JPEG compression. The second is the invariant relationship between two coefficients in a block pair before and after JPEG compression. Therefore, we can use the second property to generate the authentication signature, and use the first property to embed it as watermarks. There is no perceptible degradation between the watermarked image and the original. In addition to authentication signatures, we can also embed recovery bits for recovering approximate pixel values in corrupted areas. Our authenticator utilizes the compressed bitstream, and thus avoids rounding errors in reconstructing DCT coefficients. Experimental results showed the effectiveness of this system. The system also guarantees no false alarms, i.e., no acceptable JPEG compression is rejected.
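
The first invariance property is easy to verify numerically: a DCT coefficient pre-quantized to a grid of step q1 survives any later JPEG quantization with step q2 < q1, because the compression error is at most q2/2 < q1/2, so snapping back to the q1 grid is exact. A sketch of that argument as we read it, not the authors' code:

```python
def embed(coeff, q1):
    """Pre-quantize a DCT coefficient to an integral multiple of q1, where q1
    exceeds every quantization step of the JPEG qualities to be accepted."""
    return round(coeff / q1) * q1

def recover(received, q1):
    """After JPEG with step q2 < q1, |received - embedded| <= q2/2 < q1/2,
    so re-quantizing to the q1 grid reconstructs the embedded value exactly."""
    return round(received / q1) * q1

# Example: embed(37.3, 16) -> 32; any acceptable JPEG leaves the value in
# (24, 40), and recover(value, 16) -> 32 again.
```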

258 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed and validated dedicated cardiac reconstruction algorithms for imaging the heart with subsecond multi-slice spiral CT utilizing electrocardiogram (ECG) information.
Abstract: Subsecond spiral computed tomography (CT) offers great potential for improving heart imaging. The new multi-row detector technology adds significantly to this potential. We therefore developed and validated dedicated cardiac reconstruction algorithms for imaging the heart with subsecond multi-slice spiral CT utilizing electrocardiogram (ECG) information. The single-slice cardiac z-interpolation algorithms 180 degrees CI and 180 degrees CD [Med. Phys. 25, 2417-2431 (1998)] were generalized to allow imaging of the heart for M-slice scanners. Two classes of algorithms were investigated: 180 degrees MCD (multi-slice cardio delta), a partial scan reconstruction of 180 degrees + delta data, and 180 degrees MCI (multi-slice cardio interpolation), an ECG-correlated z-interpolation approach. For heart rates of 70 min(-1) or higher, the partial scan approach 180 degrees MCD yields unsatisfactory results as compared to 180 degrees MCI. Our theoretical considerations show that a freely selectable scanner rotation time, chosen as a function of the patient's heart rate, would further improve the relative temporal resolution and thus further reduce motion artifacts. In our case an additional 0.6 s mode besides the available 0.5 s mode would be very helpful. Moreover, if technically feasible, lower rotation times such as 0.3 s or even less would result in improved image quality. The use of multi-slice techniques for cardiac CT together with the new z-interpolation methods improves the quality of heart imaging significantly. The high temporal resolution of 180 degrees MCI is adequate for spatial and temporal tracking of anatomic structures of the heart (4D reconstruction).

Journal ArticleDOI
TL;DR: The methodology is sufficiently general that examination of optimal geometry for other FPI applications is possible and the degree to which increased exposure can be used to compensate for x-ray scatter degradation is quantified.
Abstract: A theoretical method is presented that allows identification of optimal x-ray imaging geometry, considering the effects of x-ray source distribution, imaging task, x-ray scatter, and imager detective quantum efficiency (DQE). Each of these factors is incorporated into the ICRU-recommended figure of merit for image quality, the detectability index, which is maximized to determine the optimal system configuration. Cascaded systems analysis of flat-panel imagers (FPIs) is extended to incorporate the effects of x-ray scatter directly in the DQE, showing that x-ray scatter degrades DQE as an additive noise source. Optimal magnification is computed for FPI configurations appropriate to (but not limited to) cone-beam computed tomography (CBCT). The sensitivity of the results is examined as a function of focal spot size, imaging task (e.g., ideal observer detection or discrimination tasks), x-ray scatter fraction, detector resolution, and additive noise. Nominal conditions for FPI-CBCT result in optimal magnification of ∼1.4–1.6, depending primarily on the magnitude of the x-ray scatter fraction. The methodology is sufficiently general that examination of optimal geometry for other FPI applications (e.g., chest radiography, fluoroscopy, and mammography) is possible. The degree to which increased exposure can be used to compensate for x-ray scatter degradation is quantified.
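
For readers unfamiliar with the figure of merit, one common form of the detectability index is the non-prewhitening (NPW) observer model, computable directly from the task function W, the system MTF, and the noise power spectrum (NPS). Below is a hedged sketch on a discrete frequency grid; the array inputs are hypothetical, and the paper additionally folds x-ray scatter into the DQE/NPS terms.

```python
import numpy as np

def detectability_npw(W, mtf, nps, df):
    """Non-prewhitening detectability index on a 2D frequency grid:
    d'^2 = [sum W^2 MTF^2 df^2]^2 / [sum W^2 MTF^2 NPS df^2]."""
    signal = (W * mtf) ** 2
    num = (signal.sum() * df ** 2) ** 2
    den = (signal * nps).sum() * df ** 2
    return float(np.sqrt(num / den))
```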

Journal ArticleDOI
TL;DR: The dose in computed tomography can be reduced substantially by technical measures without sacrificing image quality; attenuation-based on-line modulation of tube current is an efficient and practical means for this.
Abstract: This study investigated the potential of attenuation-based on-line modulation of tube current to reduce the dose of computed tomography (in milliamperes) without loss in image quality. The dose can be reduced for non-circular patient cross-sections by reducing the tube current at the angular positions at which the diameter through the patient is smallest. We investigated a new technical approach with attenuation-based on-line modulation of tube current. Computed tomographic projection data were analyzed to determine the optimal milliampere values for each projection angle in real time, instead of performing prior measurements with localizer radiographs. We compared image quality, noise pattern, and dose for standard scans and for scans with attenuation-based on-line modulation of tube current in a group of 30 radiation therapy patients. Six different anatomical regions were examined: head, shoulder, thorax, abdomen, pelvis, and extremities (knee). Image quality was evaluated by four radiologists in a blinded fashion. We found the dose to be reduced typically by 15–50%. In general, no deterioration in image quality was observed. Thus the dose in computed tomography can be reduced substantially by technical measures without sacrificing image quality. Attenuation-based on-line modulation of tube current is an efficient and practical means for this.
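
The physics behind attenuation-based modulation can be captured in a few lines. With projection noise variance roughly proportional to exp(A)/mA (A being the line attenuation) and dose proportional to the summed mA, minimizing dose at fixed total image noise gives mA proportional to exp(A/2), i.e., the square root of the attenuation factor. The sketch below encodes that textbook idealization, not the on-line algorithm evaluated in the paper:

```python
import numpy as np

def modulated_tube_current(attenuation_per_angle, ma_mean):
    """Idealized attenuation-based mA profile: proportional to the square
    root of the attenuation factor exp(A) at each projection angle,
    normalized to a prescribed mean tube current."""
    w = np.exp(np.asarray(attenuation_per_angle, dtype=float) / 2.0)
    return ma_mean * w / w.mean()
```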

Journal ArticleDOI
TL;DR: Two light-field compression schemes are presented. The first coder is based on video-compression techniques modified to code the four-dimensional light-field data structure efficiently; the second relies on disparity-compensated image prediction, establishing a hierarchical structure among the light-field images.
Abstract: Two light-field compression schemes are presented. The codecs are compared with regard to compression efficiency and rendering performance. The first proposed coder is based on video-compression techniques that have been modified to code the four-dimensional light-field data structure efficiently. The second coder relies entirely on disparity-compensated image prediction, establishing a hierarchical structure among the light-field images. The coding performance of both schemes is evaluated using publicly available light fields of synthetic, as well as real-world, scenes. Compression ratios vary between 100:1 and 2000:1, depending on the reconstruction quality and light-field scene characteristics.

Journal ArticleDOI
TL;DR: The SonoWand system enables neuronavigation through direct use of intraoperative three-dimensional ultrasound with clinical value similar to that of preoperative magnetic resonance imaging.
Abstract: OBJECTIVE: We have integrated a neuronavigation system into an ultrasound scanner and developed a single-rack system that enables the surgeon to perform frameless and armless stereotactic neuronavigation using intraoperative three-dimensional ultrasound data as well as preoperative magnetic resonance or computed tomographic images. The purpose of this article is to describe our two-rack prototype and present the results of our work on image quality enhancement. DESCRIPTION OF INSTRUMENTATION: The system consists of a high-end ultrasound scanner, a modest-cost computer, and an optical positioning/digitizer system. Special technical and clinical efforts have been made to achieve high image quality. A special interface between the ultrasound instrument and the navigation computer ensures rapid transfer of digital three-dimensional data with no loss of image quality. OPERATIVE TECHNIQUE: The positioning system tracks the position and orientation of the patient, the ultrasound probe, the pointer, and various surgical instruments. This makes it possible to update the three-dimensional map during surgery and navigate by ultrasound data in a similar manner as with magnetic resonance data. METHODS: The two-rack prototype has been used for clinical testing since November 1997 at the University Hospital in Trondheim. EXPERIENCE AND RESULTS: The image quality improvements have enabled us, in most cases, to extract information from ultrasound with clinical value similar to that of preoperative magnetic resonance imaging. The overall clinical accuracy of the ultrasound-based navigation system is expected to be comparable to or better than that of a magnetic resonance imaging-based system. CONCLUSION: The SonoWand system enables neuronavigation through direct use of intraoperative three-dimensional ultrasound. Further research will be necessary to explore the potential clinical value and the limitations of this technology. (Neurosurgery 47:1373‐1380, 2000)

Book ChapterDOI
16 Feb 2000
TL;DR: In this article, the authors showed that the image signal-to-noise ratio (SNR) is ultimately limited by the number of quanta used to create the image.
Abstract: A wide variety of both digital and nondigital medical-imaging systems are now in clinical use and many new system designs are under development. These are all complex systems, with multiple physical processes involved in the conversion of an input signal (e.g., x rays) to the final output image viewed by the interpreting physician. For every system, a high-quality image is obtained only when all processes are properly designed so as to ensure accurate transfer of the image signal and noise from input to output. An important aspect of imaging science is to understand the fundamental physics and engineering principles of these processes, and to predict how they influence final image quality. For instance, it has been known since the work of Rose [1–4], Shaw [5], and others that the image signal-to-noise ratio (SNR) is ultimately limited by the number of quanta used to create the image. This is illustrated in Figure 2.1, showing the improvement in image quality as the number of x-ray quanta used to produce images of a skull phantom is increased from 45 to 6720 quanta/mm^2. Negligible image noise was added by the imaging system.
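
The quantum limit referred to here is the Rose model: for a uniform detail of area A and contrast C on a background fluence q, SNR = C * sqrt(A * q), so quality improves with the square root of the number of quanta. A quick illustration using the fluences quoted above:

```python
import math

def rose_snr(contrast, area_mm2, fluence_per_mm2):
    # Rose-model SNR: contrast times the square root of the number of
    # quanta collected under the detail.
    return contrast * math.sqrt(area_mm2 * fluence_per_mm2)

# Raising the fluence from 45 to 6720 quanta/mm^2 improves the SNR of any
# given detail by sqrt(6720 / 45), i.e. roughly a factor of 12.
```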

Journal ArticleDOI
TL;DR: The authors describe the possibilities of fast 3-D-reconstruction of high-contrast objects with high spatial resolution from only a small series of two-dimensional planar radiographs from an open, mechanically unstable C-arm system.
Abstract: Increasingly, three dimensional (3-D) imaging technologies are used in medical diagnosis, for therapy planning, and during interventional procedures. The authors describe the possibilities of fast 3-D-reconstruction of high-contrast objects with high spatial resolution from only a small series of two-dimensional (2-D) planar radiographs. The special problems arising from the intended use of an open, mechanically unstable C-arm system are discussed. For the description of the irregular sampling geometry, homogeneous coordinates are used throughout. The well-known Feldkamp algorithm is modified to incorporate corresponding projection matrices without any decomposition into intrinsic and extrinsic parameters. Some approximations to speed up the whole reconstruction procedure and the tradeoff between image quality and computation time are also considered. Using standard hardware, the reconstruction of a 256^3 cube is now possible within a few minutes, a time that is acceptable during interventions. Examples for cranial vessel imaging from some clinical test installations will be shown as well as promising results for bone imaging with a laboratory C-arm system.
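
With homogeneous coordinates, the calibrated geometry of each view reduces to a single 3x4 projection matrix, and a Feldkamp-style backprojector only ever needs the voxel-to-detector mapping that matrix induces, with no decomposition into intrinsic and extrinsic parameters. A sketch of that mapping (array conventions are hypothetical):

```python
import numpy as np

def project_voxel(P, voxel):
    """Map a 3D voxel to 2D detector coordinates using a 3x4 projection
    matrix P in homogeneous coordinates, one matrix per C-arm view."""
    u = P @ np.append(voxel, 1.0)   # homogeneous image point (u, v, w)
    return u[:2] / u[2]             # perspective divide -> detector (x, y)
```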

Journal ArticleDOI
TL;DR: The encoding efficiency of single‐shot and segmented echo‐planar imaging is tripled by means of a 6‐element receiver coil array and the feasibility of this approach is verified for double oblique cardiac real‐time imaging of human subjects at rest as well as under physiological stress.
Abstract: Sensitivity encoding is used to improve the performance of real-time MRI. The encoding efficiency of single-shot and segmented echo-planar imaging is tripled by means of a 6-element receiver coil array. The feasibility of this approach is verified for double oblique cardiac real-time imaging of human subjects at rest as well as under physiological stress. Sample images are presented with scan times per image down to 13 msec at a spatial resolution of 4.1 mm, and 27 msec at a resolution of 2.6 mm. Moreover, multiple slice real-time imaging is demonstrated at a rate of 38 double-frames per second.
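
Sensitivity encoding recovers the threefold speedup by unfolding aliased pixels with the coil sensitivity profiles: for reduction factor R, each folded pixel is a known linear combination of R true pixels, giving one small least-squares solve per pixel. A minimal sketch with hypothetical array shapes; the full method also weights by the noise correlation between coils.

```python
import numpy as np

def sense_unfold(folded, sens):
    """Unfold one aliased pixel. folded: (n_coils,) complex values measured
    by the array; sens: (n_coils, R) coil sensitivities at the R locations
    that alias onto this pixel. Returns the R unaliased pixel values."""
    x, *_ = np.linalg.lstsq(sens, folded, rcond=None)
    return x
```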

Patent
02 Nov 2000
TL;DR: In this article, the authors provide laser eye surgery devices, systems, and methods which measure the refractive error in the eye before, during, and/or after vision correction surgery.
Abstract: The invention provides laser eye surgery devices, systems, and methods which measure the refractive error in the eye before, during, and/or after vision correction surgery. The invention allows adjustments during the vision correction operation, and allows qualitative and/or quantitative measurements of the progress of photorefractive treatments by projecting and imaging reference images through the cornea and other components of the ocular optical system. A slope of an image quality value such as an Optical Transfer Function may be monitored during the procedure to help determine when to terminate treatment.

Journal ArticleDOI
TL;DR: This work proposes an approximate cone-beam algorithm which uses virtual reconstruction planes tilted to optimally fit 180° spiral segments, i.e., the advanced single-slice rebinning (ASSR) algorithm, a modification of the single-slice rebinning algorithm proposed by Noo et al.
Abstract: To achieve higher volume coverage at improved z-resolution in computed tomography (CT), systems with a large number of detector rows are demanded. However, handling an increased number of detector rows, as compared to today's four-slice scanners, requires accounting for the cone geometry of the beams. Many so-called cone-beam reconstruction algorithms have been proposed during the last decade. None met all the requirements of medical spiral cone-beam CT in regard to the need for high image quality, low patient dose and low reconstruction times. We therefore propose an approximate cone-beam algorithm which uses virtual reconstruction planes tilted to optimally fit 180 degrees spiral segments, i.e., the advanced single-slice rebinning (ASSR) algorithm. Our algorithm is a modification of the single-slice rebinning algorithm proposed by Noo et al [Phys. Med. Biol. 44, 561-570 (1999)], since we use tilted reconstruction slices instead of transaxial slices to approximate the spiral path. Theoretical considerations as well as the reconstruction of simulated phantom data in comparison to the gold standard 180 degrees LI (single-slice spiral CT) were carried out. Image artifacts, z-resolution as well as noise levels were evaluated for all simulated scanners. Even for a high number of detector rows the artifact level in the reconstructed images remains comparable to that of 180 degrees LI. Multiplanar reformations of the Defrise phantom show none of the typical cone-beam artifacts usually appearing when going to larger cone angles. Image noise as well as the shape of the respective slice sensitivity profiles are equivalent to the single-slice spiral reconstruction; z-resolution is slightly decreased. The ASSR has the potential to become a practical tool for medical spiral cone-beam CT. Its computational complexity lies in the order of standard single-slice CT and it allows the use of available 2D backprojection hardware.

Book ChapterDOI
16 Feb 2000
TL;DR: The final stage in assessment of image quality is the ability of a particular imaging device and clinical protocol to improve diagnostic accuracy; that issue, however, is beyond the scope of this chapter.
Abstract: The last few years have seen a rapid increase in the number of digital imaging devices produced for radiographic applications. With digitized video/image intensifiers, computed radiography (CR), and more recently the advent of flat-panel imagers, an almost bewildering number of choices of digital imaging devices is available. The good news from this rapid progress in digital radiography is that devices are becoming available with image quality superior to that available just a few years ago. The challenge from this proliferation of digital devices is making the difficult decision about which device is most suitable for a particular application. Radiologists will rightly ask, “How can I know which device I should purchase for my clinical imaging needs?” Imaging physicists, on the other hand, will be concerned with verifying the image quality specifications of the various manufacturers, developing appropriate algorithms to extend the utility of the devices, and developing appropriate clinical imaging protocols to take best advantage of the devices' imaging performance. Physicists and radiologists will need to work collaboratively to determine the appropriate applications for these devices, and the performance that may be expected from them. In order to assess the performance of these devices, it is necessary to consider both physical image-quality parameters as well as the observer's perceptual response. Both of these areas are more difficult with digital devices than with screen film. Because the image has been sampled, quantized, and processed, there are a number of changes necessary in traditional measurements of image-quality. Furthermore, because there is an almost unlimited degree of image processing that can be done to the images, it becomes more difficult to gauge observer acceptance of the resulting images. There are a number of physical and observer-based measurements which can be used to gauge image-quality, and these will be considered in this chapter. This chapter is divided into four sections. First, global measures of image-quality will be addressed, using measurements in Cartesian (or image-intensity) space. Second, measures in spatial frequency space will be considered. Third, methods of assessing image processing will be considered, and last, observer assessment will be addressed. Of course, the final stage in assessment of image-quality is the ability of a particular imaging device and clinical protocol to improve diagnostic accuracy. The issue of diagnostic accuracy, while very important, is an entire field of study in itself, and is therefore beyond the scope of this chapter. The interested reader is referred to other chapters in this volume on measuring diagnostic accuracy.

Journal ArticleDOI
TL;DR: The spectrum of possible evaluation methods of importance will be described and the principles, benefits and drawbacks of some of these methods will be given together with examples of their use.
Abstract: In medical imaging, information about the patient and possible abnormalities is transferred to the radiologist in two major steps: (i) data acquisition and image formation, and (ii) processing and display. Step one is mainly dependent on technical and physical characteristics of the equipment. Step two includes the vital importance of the performance of the radiologist; i.e. how he or she detects and interprets the structures in the image. The quality of a radiographical procedure must therefore be described with regard to both these steps. The spectrum of possible evaluation methods of importance will be described. The principles, benefits and drawbacks of some of these methods will be given together with examples of their use.

Proceedings ArticleDOI
11 Jun 2000
TL;DR: A fully automatic method for the correction of intensity nonuniformity in MR images that does not require any a priori model of the tissue classes and is evaluated using both real and simulated data.
Abstract: Outlines a fully automatic method for the correction of intensity nonuniformity in MR images. This method does not require any a priori model of the tissue classes. The basic idea is that entropy is a good measure of the image quality which can be minimized in order to overcome the bias problem. Therefore, the optimal correcting field is defined by the minimum of a functional combining the restored image entropy and a measure of the field smoothness. This measure stems from the usual physical analogy with membranes. A third term added to the functional prevents the optimal field from being uniformly null. The functional is minimized using a fast annealing schedule. The performance of the method is evaluated using both real and simulated data.
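
The quality measure at the heart of this method is the Shannon entropy of the restored image's gray-level histogram; the bias field is chosen to minimize that entropy plus a membrane-style smoothness penalty and a term preventing the uniformly null field. A sketch of the entropy term alone:

```python
import numpy as np

def gray_level_entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram. A well-corrected image
    (tight tissue-class intensity peaks) scores lower than a bias-corrupted
    one, which smears the peaks and raises the entropy."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())
```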

Journal ArticleDOI
TL;DR: The results indicate that the FPD-based CBVCT can achieve 2.75 lp/mm spatial resolution at 0% modulation transfer function (MTF) and provide more than enough low-contrast resolution for intravenous CBVCTA imaging in the head and neck with a clinically acceptable entrance exposure level.
Abstract: Preliminary evaluation of recently developed large-area flat panel detectors (FPDs) indicates that FPDs have some potential advantages: compactness, absence of geometric distortion and veiling glare, with the benefits of high resolution, high detective quantum efficiency (DQE), high frame rate and high dynamic range, small image lag (<1%), and excellent linearity (~1%). The advantages of the new FPD make it a promising candidate for cone-beam volume computed tomography (CT) angiography (CBVCTA) imaging. The purpose of this study is to characterize a prototype FPD-based imaging system for CBVCTA applications. A prototype FPD-based CBVCTA imaging system has been designed and constructed around a modified GE 8800 CT scanner. This system is evaluated for a CBVCTA imaging task in the head and neck using four phantoms and a frozen rat. The system is first characterized in terms of linearity and dynamic range of the detector. Then, the optimal selection of kVps for CBVCTA is determined and the effect of image lag and scatter on the image quality of the CBVCTA system is evaluated. Next, low-contrast resolution and high-contrast spatial resolution are measured. Finally, example reconstruction images of a frozen rat are presented. The results indicate that the FPD-based CBVCT can achieve 2.75 lp/mm spatial resolution at 0% modulation transfer function (MTF) and provide more than enough low-contrast resolution for intravenous CBVCTA imaging in the head and neck with a clinically acceptable entrance exposure level. The results also suggest that to use an FPD for large cone-angle applications, such as body angiography, further investigations are required.

Journal ArticleDOI
TL;DR: A novel technique for manipulating contrast in projection reconstruction MRI is described, implemented into a fast spin‐echo (FSE) sequence, and it is shown that multiple T2‐weighted images can be reconstructed from a single image data set.
Abstract: A novel technique for manipulating contrast in projection reconstruction MRI is described. The method takes advantage of the fact that the central region of k-space is oversampled, allowing one to choose different filters to enhance or reduce the amount that each view contributes to the central region, which dominates image contrast. The technique is implemented into a fast spin-echo (FSE) sequence, and it is shown that multiple T2-weighted images can be reconstructed from a single image data set. These images are shown to be nearly identical to those acquired with the Cartesian-sampled FSE sequence at different effective echo times. Further, it is demonstrated that T2 maps can be generated from a single image data set. This technique also has the potential to be useful in dynamic contrast enhancement studies, capable of yielding a series of images at a significantly higher effective temporal resolution than what is currently possible with other methods, without sacrificing spatial resolution. Magn Reson Med 44:825–832, 2000. © 2000 Wiley-Liss, Inc.

Journal ArticleDOI
TL;DR: The quantitative evaluation of RKT performance showed that in terms of average contrast-to-noise ratio, there is a significant improvement in image quality between original and enhanced images.
Abstract: During the last few years, optical coherence tomography (OCT) has demonstrated considerable promise as a method of high-resolution intravascular imaging. The goal of this study was to apply and to test the applicability of the rotating kernel transformation (RKT) technique to the speckle reduction and enhancement of OCT images. The technique is locally adaptive. It is based on sequential application of directional masks and selection of the maximum of all outputs. This method enhances the image features by emphasizing thin edges while suppressing a noisy background. Qualitatively, the RKT algorithm provides noticeable improvement over the original image. All processed images are smoother and have better-defined borders of media, intima, and plaque. The quantitative evaluation of RKT performance showed that in terms of average contrast-to-noise ratio, there is a significant improvement in image quality between original and enhanced images. The RKT image enhancement technique shows great promise in improving OCT images for superior boundary identification.
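
The RKT idea (sequential directional masks, keep the maximum output) is compact enough to sketch. The kernel construction below (thin line masks rotated over a half-circle) is a generic rendition with hypothetical defaults, not the authors' tuned implementation:

```python
import numpy as np
from scipy.ndimage import convolve

def rkt_enhance(img, size=9, n_angles=8):
    """Rotating kernel transformation sketch: convolve with a thin line
    kernel at n_angles orientations and keep the per-pixel maximum, which
    emphasizes thin edges while averaging down speckle along each line."""
    img = np.asarray(img, dtype=float)
    out = np.full(img.shape, -np.inf)
    c = size // 2
    for k in range(n_angles):
        theta = np.pi * k / n_angles
        kern = np.zeros((size, size))
        for t in np.linspace(-c, c, 4 * size):      # rasterize a line mask
            kern[int(round(c + t * np.sin(theta))),
                 int(round(c + t * np.cos(theta)))] = 1.0
        out = np.maximum(out, convolve(img, kern / kern.sum()))
    return out
```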

Journal ArticleDOI
TL;DR: Despite motion and deformation in the liver, mutual information registration is sufficiently accurate and robust for useful applications in I-MRI thermal ablation therapy.
Abstract: The authors evaluated semiautomatic, voxel-based registration methods for a new application, the assessment and optimization of interventional magnetic resonance imaging (I-MRI) guided thermal ablation of liver cancer. The abdominal images acquired on a low-field-strength, open I-MRI system contain noise, motion artifacts, and tissue deformation. Dissimilar images can be obtained as a result of different MRI acquisition techniques and/or changes induced by treatments. These features challenge a registration algorithm. The authors evaluated one manual and four automated methods on clinical images acquired before treatment, immediately following treatment, and during several follow-up studies. Images were T2-weighted, T1-weighted Gd-DTPA enhanced, T1-weighted, and short-inversion-time inversion recovery (STIR). Registration accuracy was estimated from distances between anatomical landmarks. Mutual information gave better results than entropy, correlation, and variance of gray-scale ratio. Preprocessing steps such as masking and an initialization method that used two-dimensional (2-D) registration to obtain initial transformation estimates were crucial. With proper preprocessing, automatic registration was successful with all image pairs having reasonable image quality. A registration accuracy of ~3 mm was achieved with both manual and mutual information methods. Despite motion and deformation in the liver, mutual information registration is sufficiently accurate and robust for useful applications in I-MRI thermal ablation therapy.
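
Mutual information, the similarity measure that performed best here, is computed from the joint gray-level histogram of the two images at the current alignment. A minimal sketch of the measure itself; the clinical pipeline described above adds masking, 2-D initialization, and an optimizer over transformations.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI of two equally sized images from their joint histogram: high when
    one image's intensities are predictable from the other's, without
    assuming a linear relationship (hence its robustness across sequences)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)    # marginal of image a
    py = p.sum(axis=0, keepdims=True)    # marginal of image b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```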

Proceedings ArticleDOI
03 Apr 2000
TL;DR: A pixel-level image fusion performance metric is proposed that models the accuracy with which visual information is transferred from the input images to the fused image; experimental results indicate that the metric is perceptually meaningful.
Abstract: This paper addresses the issue of objectively measuring the performance of pixel level image fusion systems. The proposed fusion performance metric models the accuracy with which visual information is transferred from the input images to the fused image. Experimental results clearly indicate that the metric is perceptually meaningful.
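
Metrics of this family typically score how much of the input images' edge information reappears in the fused image. The sketch below is a crude gradient-preservation score in that spirit, explicitly not the authors' formulation:

```python
import numpy as np

def gradient_preservation(a, b, fused, eps=1e-12):
    """Fraction of the stronger input edge strength preserved per pixel,
    averaged over pixels that carry edges (1.0 = fully transferred)."""
    def gmag(x):
        gy, gx = np.gradient(np.asarray(x, dtype=float))
        return np.hypot(gx, gy)
    ref = np.maximum(gmag(a), gmag(b))        # best available edge strength
    score = np.minimum(gmag(fused), ref) / (ref + eps)
    return float(score[ref > eps].mean())
```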

Journal ArticleDOI
TL;DR: The evaluation of 24 metrics for use in autocorrection of MR images of the rotator cuff found the entropy of the one‐dimensional gradient along the phase‐encoding direction exhibited the strongest relationship with observer ratings of MR shoulder images.
Abstract: Magnetic resonance (MR) imaging of the shoulder necessitates high spatial and contrast resolution resulting in long acquisition times, predisposing these images to degradation due to motion. Autocorrection is a new motion correction algorithm that attempts to deduce motion during imaging by calculating a metric that reflects image quality and searching for motion values that optimize this metric. The purpose of this work is to report on the evaluation of 24 metrics for use in autocorrection of MR images of the rotator cuff. Raw data from 164 clinical coronal rotator cuff exams acquired with interleaved navigator echoes were used. Four observers then scored the original and corrected images based on the presence of any motion-induced artifacts. Changes in metric values before and after navigator-based adaptive motion correction were correlated with changes in observer score using a least-squares linear regression model. Based on this analysis, the metric that exhibited the strongest relationship with observer ratings of MR shoulder images was the entropy of the one-dimensional gradient along the phase-encoding direction. We speculate (and show preliminary evidence) that this metric will be useful not only for autocorrection of shoulder MR images but also for autocorrection of other MR exams. J. Magn. Reson. Imaging 2000;11:174–181. © 2000 Wiley-Liss, Inc.
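
The winning metric is simple to state: the entropy of the one-dimensional gradient taken along the phase-encoding direction, which drops as motion ghosting is removed. A sketch of the metric itself; the autocorrection loop that searches motion parameters around it is omitted.

```python
import numpy as np

def gradient_entropy(img, phase_axis=0):
    """Entropy of the 1D gradient along the phase-encoding axis; sharper,
    ghost-free images concentrate gradient energy in fewer, stronger
    edges and therefore score lower."""
    g = np.abs(np.diff(np.asarray(img, dtype=float), axis=phase_axis)).ravel()
    p = g / (g.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```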

Journal ArticleDOI
TL;DR: To efficiently embed the watermark within the image without loss of image quality and to provide robustness for watermark detection under attacks, a modular-based spatial threshold and adjustment scheme for the wavelet coefficients has been developed in this research.
Abstract: A new watermarking scheme which incorporates wavelet and spatial transformation has been developed for digital images. This algorithm utilizes the wavelet multiresolutional structure to construct the image frequency components, and a chaotic transformation as a two-dimensional integer vector generator for the spatial transform that selects locations during watermark embedding. To efficiently embed the watermark within the image without loss of image quality and to provide robustness for watermark detection under attacks, a modular-based spatial threshold and adjustment scheme for the wavelet coefficients has been developed in this research. Unlike other watermarking schemes, which usually rely on a significantly large amount of side information for watermark detection, our algorithm needs only a few key parameters to detect a watermark with meaningful content. Compared with some known watermarking approaches, this algorithm yields superior robustness and information protection, keeping the watermark intact under image-processing attacks such as image compression.
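
The "chaotic transformation as a two-dimensional integer vector generator" can be any area-preserving integer map; the paper does not name its map, but Arnold's cat map is a common choice and illustrates how a short key (the iteration count) scatters embedding locations over an n x n coefficient grid. A hedged sketch under that assumption:

```python
def cat_map(x, y, n, iterations=1):
    """Arnold's cat map on an n x n integer grid: a bijective, highly mixing
    map, so watermark-embedding locations look random yet are exactly
    recoverable by anyone holding the key (the iteration count)."""
    for _ in range(iterations):
        x, y = (x + y) % n, (x + 2 * y) % n
    return x, y

# Example: scatter the first four embedding sites on a 512 x 512 subband.
sites = [cat_map(i, 0, 512, iterations=10) for i in range(4)]
```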