
Showing papers on "Image quality published in 2005"


Journal ArticleDOI
TL;DR: This paper proposes a novel information fidelity criterion that is based on natural scene statistics and derives a novel QA algorithm that provides clear advantages over the traditional approaches and outperforms current methods in testing.
Abstract: Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Traditionally, image QA algorithms interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by arbitrary signal fidelity criteria. In this paper, we approach the problem of image QA by proposing a novel information fidelity criterion that is based on natural scene statistics. QA systems are invariably involved with judging the visual quality of images and videos that are meant for "human consumption". Researchers have developed sophisticated models to capture the statistics of natural signals, that is, pictures and videos of the visual environment. Using these statistical models in an information-theoretic setting, we derive a novel QA algorithm that provides clear advantages over the traditional approaches. In particular, it is parameterless and outperforms current methods in our testing. We validate the performance of our algorithm with an extensive subjective study involving 779 images. We also show that, although our approach distinctly departs from traditional HVS-based methods, it is functionally similar to them under certain conditions, yet it outperforms them due to improved modeling. The code and the data from the subjective study are available at [1].
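
The information-theoretic idea admits a compact illustration. Below is a hedged, pixel-domain sketch in the spirit of the criterion, not the paper's exact algorithm: each local window is treated as Gaussian, the distortion channel is modeled as a gain plus additive noise, and fidelity is the ratio of the information the distorted image preserves to what the reference carries. The window size and visual-noise variance are illustrative assumptions; the published criterion operates on wavelet subbands with a Gaussian scale mixture model.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def info_fidelity(ref, dist, sigma_n2=2.0, win=9):
    """Toy information-fidelity index (simplified stand-in for the paper's).

    Models each local window as Gaussian and the distortion channel as a
    gain g plus additive noise; sigma_n2 is an assumed variance of the
    'visual noise' in the perception model.
    """
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    mu_r, mu_d = uniform_filter(ref, win), uniform_filter(dist, win)
    var_r = np.maximum(uniform_filter(ref * ref, win) - mu_r ** 2, 1e-10)
    var_d = np.maximum(uniform_filter(dist * dist, win) - mu_d ** 2, 0.0)
    cov = uniform_filter(ref * dist, win) - mu_r * mu_d

    g = cov / var_r                           # distortion-channel gain
    sv2 = np.maximum(var_d - g * cov, 1e-10)  # residual additive noise

    # information preserved by the distorted image vs. the reference
    num = np.log2(1.0 + g ** 2 * var_r / (sv2 + sigma_n2)).sum()
    den = np.log2(1.0 + var_r / sigma_n2).sum()
    return num / den                          # 1.0 means perfect fidelity
```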

1,334 citations



Journal ArticleDOI
TL;DR: It is claimed that natural scenes contain nonlinear dependencies that are disturbed by the compression process, and that this disturbance can be quantified and related to human perceptions of quality.
Abstract: Measurement of image or video quality is crucial for many image-processing applications, such as acquisition, compression, restoration, enhancement, and reproduction. Traditionally, image quality assessment (QA) algorithms interpret image quality as similarity with a "reference" or "perfect" image. The obvious limitation of this approach is that the reference image or video may not be available to the QA algorithm. The field of blind, or no-reference, QA, in which image quality is predicted without the reference image or video, has been largely unexplored, with algorithms focusing mostly on measuring blocking artifacts. Emerging image and video compression technologies can avoid the dreaded blocking artifact by using various mechanisms, but they introduce other types of distortions, specifically blurring and ringing. In this paper, we propose to use natural scene statistics (NSS) to blindly measure the quality of images compressed by JPEG2000 (or any other wavelet-based) image coder. We claim that natural scenes contain nonlinear dependencies that are disturbed by the compression process, and that this disturbance can be quantified and related to human perceptions of quality. We train and test our algorithm with data from human subjects, and show that reasonably comprehensive NSS models can help us make blind, but accurate, predictions of quality. Our algorithm performs close to the limit imposed on useful prediction by the variability between human subjects.

612 citations


Journal ArticleDOI
TL;DR: Results are presented to show that the proposed system provides an improvement in image quality of stereoscopic virtual views while maintaining reasonably good depth quality.
Abstract: A depth-image-based rendering system for generating stereoscopic images is proposed. One important aspect of the proposed system is that the depth maps are pre-processed using an asymmetric filter to smooth the sharp changes in depth at object boundaries. In addition to ameliorating the effects of blocky artifacts and other distortions contained in the depth maps, the smoothing reduces or completely removes newly exposed (disocclusion) areas where potential artifacts can arise from the image warping needed to generate images from new viewpoints. The asymmetric nature of the filter reduces the amount of geometric distortion that might otherwise be perceived. We present results to show that the proposed system improves the image quality of stereoscopic virtual views while maintaining reasonably good depth quality.
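
The pre-processing step lends itself to a short illustration. Below is a minimal sketch assuming a grayscale image and a single-channel depth map; the sigma values, the depth-proportional disparity, and the naive forward warp are illustrative assumptions, not the paper's actual filter or rendering pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_depth(depth, sigma_v=20.0, sigma_h=5.0):
    """Asymmetric smoothing of a depth map (hedged sketch).

    A larger vertical than horizontal sigma mimics the paper's asymmetric
    filter: it closes disocclusion holes that warping would expose while
    limiting perceptible geometric distortion. Sigmas are illustrative.
    """
    return gaussian_filter(depth.astype(np.float64), sigma=(sigma_v, sigma_h))

def warp_view(image, depth, max_disparity=16):
    """Naive forward warp: shift each pixel horizontally by a disparity
    proportional to its (smoothed) depth to synthesize one stereo view."""
    h, w = depth.shape
    out = np.zeros_like(image)
    disp = (depth / max(depth.max(), 1e-6) * max_disparity).astype(int)
    cols = np.arange(w)
    for r in range(h):
        tgt = np.clip(cols + disp[r], 0, w - 1)
        out[r, tgt] = image[r, cols]
    return out
```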

562 citations


Proceedings ArticleDOI
18 Mar 2005
TL;DR: This paper proposes an RR image quality assessment method based on a natural image statistic model in the wavelet transform domain that uses the Kullback-Leibler distance between the marginal probability distributions of wavelet coefficients of the reference and distorted images as a measure of image distortion.
Abstract: Reduced-reference (RR) image quality measures aim to predict the visual quality of distorted images with only partial information about the reference images. In this paper, we propose an RR quality assessment method based on a natural image statistic model in the wavelet transform domain. In particular, we observe that the marginal distribution of wavelet coefficients changes in different ways for different types of image distortions. To quantify such changes, we estimate the Kullback-Leibler distance between the marginal distributions of wavelet coefficients of the reference and distorted images. A generalized Gaussian model is employed to summarize the marginal distribution of wavelet coefficients of the reference image, so that only a relatively small number of RR features are needed for the evaluation of image quality. The proposed method is easy to implement and computationally efficient. In addition, we find that many well-known types of image distortion lead to significant changes in wavelet coefficient histograms, and thus are readily detectable by our measure. The algorithm is tested with subjective ratings of a large image database that contains images corrupted with a wide variety of distortion types.
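
The measure described here can be sketched compactly. The snippet below fits a generalized Gaussian to each wavelet subband by moment matching and uses the closed-form KL divergence between two GGDs (Do & Vetterli, 2002). Note one simplification: the paper compares a reference-side GGD fit against the distorted coefficient histogram, whereas this sketch fits a GGD to both images.

```python
import numpy as np
import pywt
from scipy.special import gamma as G
from scipy.optimize import brentq

def fit_ggd(x):
    # Moment-matching estimator for a zero-mean generalized Gaussian:
    # E|x| = a*G(2/b)/G(1/b), E[x^2] = a^2*G(3/b)/G(1/b).
    x = np.asarray(x, dtype=np.float64).ravel()
    m1, m2 = np.mean(np.abs(x)), np.mean(x ** 2)
    rho = m1 ** 2 / m2  # monotone in the shape parameter b
    f = lambda b: G(2 / b) ** 2 / (G(1 / b) * G(3 / b)) - rho
    b = brentq(f, 0.1, 20.0)
    a = m1 * G(1 / b) / G(2 / b)
    return a, b

def kld_ggd(a1, b1, a2, b2):
    # Closed-form KL divergence between two zero-mean GGDs (Do & Vetterli).
    return (np.log((b1 * a2 * G(1 / b2)) / (b2 * a1 * G(1 / b1)))
            + (a1 / a2) ** b2 * G((b2 + 1) / b1) / G(1 / b1) - 1 / b1)

def rr_distortion(ref, dist, wavelet="db2", levels=3):
    # Sum subband-wise divergences; larger means more perceptual distortion.
    d = 0.0
    ref_bands = pywt.wavedec2(ref, wavelet, level=levels)[1:]
    dist_bands = pywt.wavedec2(dist, wavelet, level=levels)[1:]
    for r_lvl, d_lvl in zip(ref_bands, dist_bands):
        for sr, sd in zip(r_lvl, d_lvl):
            d += kld_ggd(*fit_ggd(sr), *fit_ggd(sd))
    return d
```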

480 citations


Journal ArticleDOI
TL;DR: A validation study on statistical nonsupervised brain tissue classification techniques in magnetic resonance (MR) images demonstrates that methods relying on both intensity and spatial information are more robust to noise and field inhomogeneities and shows that simulated data results can be extended to real data.
Abstract: This paper presents a validation study on statistical nonsupervised brain tissue classification techniques in magnetic resonance (MR) images. Several image models assuming different hypotheses regarding the intensity distribution model, the spatial model and the number of classes are assessed. The methods are tested on simulated data for which the classification ground truth is known. Different noise and intensity nonuniformities are added to simulate real imaging conditions. No enhancement of the image quality is considered either before or during the classification process. This way, the accuracy of the methods and their robustness against image artifacts are tested. Classification is also performed on real data where a quantitative validation compares the methods' results with an estimated ground truth from manual segmentations by experts. Validity of the various classification methods in the labeling of the image as well as in the tissue volume is estimated with different local and global measures. Results demonstrate that methods relying on both intensity and spatial information are more robust to noise and field inhomogeneities. We also demonstrate that partial volume is not perfectly modeled, even though methods that account for mixture classes outperform methods that only consider pure Gaussian classes. Finally, we show that simulated data results can also be extended to real data.

381 citations


Journal ArticleDOI
TL;DR: An automatic exposure control mechanism that is based on real-time anatomy-dependent tube current modulation delivers good image quality with a significantly reduced radiation dose.
Abstract: PURPOSE: To prospectively compare dose reduction and image quality achieved with an automatic exposure control system that is based on both angular (x-y axis) and z-axis tube current modulation with dose reduction and image quality achieved with an angular modulation system for multi-detector row computed tomography (CT). MATERIALS AND METHODS: The study protocol was approved by the institutional review board, and oral informed consent was obtained. In two groups of 200 patients, five anatomic regions (i.e., the thorax, abdomen-pelvis, abdomen-liver, lumbar spine, and cervical spine) were examined with this modulation system and a six-section multi-detector row CT scanner. Data from these patients were compared with data from 200 patients who were examined with an angular modulation system. Dose reduction by means of reduction of the mean effective tube current in 600 examinations, image noise in 200 examinations performed with each modulation system, and subjective image quality scores in 100 examinations ...

345 citations


Reference BookDOI
TL;DR: Video Quality Experts Group.
Abstract:
PICTURE CODING AND HUMAN VISUAL SYSTEM FUNDAMENTALS
Digital Picture Compression and Coding Structure: Introduction to Digital Picture Coding; Characteristics of Picture Data; Compression and Coding Techniques; Picture Quantization; Rate-Distortion Theory; Human Visual Systems; Digital Picture Coding Standards and Systems; Summary.
Fundamentals of Human Vision and Vision Modeling: Introduction; A Brief Overview of the Visual System; Color Vision; Luminance and the Perception of Light Intensity; Spatial Vision and Contrast Sensitivity; Temporal Vision and Motion; Visual Modeling; Conclusions.
Coding Artifacts and Visual Distortions: Introduction; Blocking Effect; Basis Image Effect; Blurring; Color Bleeding; Staircase Effect; Ringing; Mosaic Patterns; False Contouring; False Edges; MC Mismatch; Mosquito Effect; Stationary Area Fluctuations; Chrominance Mismatch; Video Scaling and Field Rate Conversion; Deinterlacing; Summary.
PICTURE QUALITY ASSESSMENT AND METRICS
Video Quality Testing: Introduction; Subjective Assessment Methodologies; Selection of Test Materials; Selection of Participants-Subjects; Experimental Design; International Test Methods; Objective Assessment Methods; Summary.
Perceptual Video Quality Metrics-A Review: Introduction; Quality Factors; Metric Classification; Pixel-Based Metrics; The Psychophysical Approach; The Engineering Approach; Metric Comparisons; Conclusions and Perspectives.
Philosophy of Picture Quality Scale: Objective Picture Quality Scale for Image Coding; Application of PQS to a Variety of Electronic Images; Various Categories of Image Systems; Study at ITU; Conclusion.
Structural Similarity Based Image Quality Assessment: Structural Similarity Based Image Quality; The Structural SIMilarity (SSIM) Index; Image Quality Assessment Based on the SSIM Index; Discussions.
Vision Model Based Digital Video Impairment Metrics: Introduction; Vision Modeling for Impairment Measurement; Perceptual Blocking Distortion Metric; Perceptual Ringing Distortion Measure; Conclusion.
Computational Models for Just-Noticeable Difference: Introduction; JND with DCT Subbands; JND with Pixels; JND Model Evaluation; Conclusions.
No-Reference Quality Metric for Degraded and Enhanced Video: Introduction; State-of-the-Art for No-Reference Metrics; Quality Metric Components and Design; No-Reference Overall Quality Metric; Performance of the Quality Metric; Conclusions and Future Research.
Video Quality Experts Group: Formation; Goals; Phase I; Phase II; Continuing Work and Directions; Summary.
PERCEPTUAL CODING AND PROCESSING OF DIGITAL PICTURES
HVS Based Perceptual Video Encoders: Introduction; Noise Visibility and Visual Masking; Architectures for Perceptual Based Coding; Standards-Specific Features; Salience/Maskability Pre-Processing; Application to Multi-Channel Encoding.
Perceptual Image Coding: Introduction; A Perceptual Distortion Metric Based Image Coder; Model Calibration; Performance Evaluation; Perceptual Lossless Coder; Summary.
Foveated Image and Video Coding: Foveated Human Vision and Foveated Image Processing; Foveation Methods; Scalable Foveated Image and Video Coding; Discussions.
Artifact Reduction by Post-Processing in Image Compression: Introduction; Image Compression and Coding Artifacts; Reduction of Blocking Artifacts; Reduction of Ringing Artifacts; Summary.
Reduction of Color Bleeding in DCT Block-Coded Video: Introduction; Detailed Analysis of the Color Bleeding Phenomenon; Description of the Post-Processor; Experimental Results-Concluding Remarks.
Error Resilience for Video Coding Service: Introduction to Error Resilient Coding Techniques; Error Resilient Coding Methods Compatible with MPEG-2; Methods for Concealment of Cell Loss; Experimental Procedure; Experimental Results; Conclusions.
Critical Issues and Challenges: Picture Coding Structures; Vision Modeling Issues; Spatio-Temporal Masking in Video Coding; Picture Quality Assessment; Challenges in Perceptual Coder Design; Codec System Design Optimization; Summary.
Appendix: VQM Performance Metrics: Metrics Relating to Model Prediction Accuracy; Metrics Relating to Prediction Monotonicity of a Model; Metrics Relating to Prediction Consistency; MATLAB(R) Source Code; Supplementary Analyses.
INDEX

333 citations


Book ChapterDOI
01 Dec 2005

330 citations


Journal ArticleDOI
TL;DR: The intraoperative cone-beam CT images were sufficient for guidance of needles and catheters with respect to bony anatomy and improved surgical performance and confidence through 3D visualization and verification of transpedicular trajectories and tool placement.
Abstract: A mobile isocentric C-arm (Siemens PowerMobil) has been modified in our laboratory to include a large-area flat-panel detector (in place of the x-ray image intensifier), providing multi-mode fluoroscopy and cone-beam computed tomography (CT) imaging capability. This platform represents a promising technology for minimally invasive, image-guided surgical procedures where precision in the placement of interventional tools with respect to bony and soft-tissue structures is critical. The image quality and performance in surgical guidance was investigated in pre-clinical evaluation in image-guided spinal surgery. The control, acquisition, and reconstruction system are described. The reproducibility of geometric calibration, essential to achieving high three-dimensional (3D) image quality, is tested over extended time scales (7 months) and across a broad range in C-arm angulation (up to 45°), quantifying the effect of improper calibration on spatial resolution, soft-tissue visibility, and image artifacts. Phantom studies were performed to investigate the precision of 3D localization (viz., fiber optic probes within a vertebral body) and the effect of lateral projection truncation (limited field of view) on soft-tissue detectability in image reconstructions. Pre-clinical investigation was undertaken in a specific spinal procedure (photodynamic therapy of spinal metastases) in five animal subjects (pigs). In each procedure, placement of fiber optic catheters in two vertebrae (L1 and L2) was guided by fluoroscopy and cone-beam CT. Experience across five procedures is reported, focusing on 3D image quality, the effects of respiratory motion, limited field of view, reconstruction filter, and imaging dose. Overall, the intraoperative cone-beam CT images were sufficient for guidance of needles and catheters with respect to bony anatomy and improved surgical performance and confidence through 3D visualization and verification of transpedicular trajectories and tool placement. Future investigation includes improvement in image quality, particularly regarding x-ray scatter, motion artifacts and field of view, and integration with optical tracking and navigation systems.

325 citations


Journal ArticleDOI
TL;DR: A simple scatter correction method in which scatter fluence is estimated directly in each projection from pixel values near the edge of the detector behind the collimator leaves, which provides significant reduction in scatter artifacts without compromise in contrast-to-noise ratio (CNR).
Abstract: X-ray scatter poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in contrast reduction, image artifacts, and lack of CT number accuracy. We report the performance of a simple scatter correction method in which scatter fluence is estimated directly in each projection from pixel values near the edge of the detector behind the collimator leaves. The algorithm operates on the simple assumption that signal in the collimator shadow is attributable to x-ray scatter, and the 2D scatter fluence is estimated by interpolating between pixel values measured along the top and bottom edges of the detector behind the collimator leaves. The resulting scatter fluence estimate is subtracted from each projection to yield an estimate of the primary-only images for CBCT reconstruction. Performance was investigated in phantom experiments on an experimental CBCT bench-top, and the effect on image quality was demonstrated in patient images (head, abdomen, and pelvis sites) obtained on a preclinical system for CBCT-guided radiation therapy. The algorithm provides significant reduction in scatter artifacts without compromise in contrast-to-noise ratio (CNR). For example, in a head phantom, cupping artifact was essentially eliminated, CT number accuracy was restored to within 3%, and CNR (breast-to-water) was improved by up to 50%. Similarly in a body phantom, cupping artifact was reduced by at least a factor of 2 without loss in CNR. Patient images demonstrate significantly increased uniformity, accuracy, and contrast, with an overall improvement in image quality in all sites investigated. Qualitative evaluation illustrates that soft-tissue structures that are otherwise undetectable are clearly delineated in scatter-corrected reconstructions. Since scatter is estimated directly in each projection, the algorithm is robust with respect to system geometry, patient size and heterogeneity, patient motion, etc. Operating without prior information, analytical modeling, or Monte Carlo, the technique is easily incorporated as a preprocessing step in CBCT reconstruction to provide significant scatter reduction.
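
Because the estimate is computed independently in each projection, the core of the method is only a few lines. Below is a minimal sketch assuming each 2D projection's top and bottom pixel rows lie in the collimator shadow; the edge width and clipping floor are illustrative choices, not the authors' parameters.

```python
import numpy as np

def correct_scatter(proj, edge=8):
    """Collimator-shadow scatter correction for one projection (sketch).

    Signal in rows [0:edge] and [-edge:] is assumed to be attributable to
    x-ray scatter. The 2D scatter fluence is a column-by-column linear
    interpolation between the mean shadow signals at top and bottom,
    subtracted from the projection to estimate the primary-only image.
    """
    proj = proj.astype(np.float64)
    top = proj[:edge].mean(axis=0)    # scatter sampled along the top edge
    bot = proj[-edge:].mean(axis=0)   # ... and along the bottom edge
    h = proj.shape[0]
    w = np.linspace(0.0, 1.0, h)[:, None]
    scatter = (1.0 - w) * top[None, :] + w * bot[None, :]
    return np.clip(proj - scatter, 1e-6, None)  # keep positive for the log
```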

Journal ArticleDOI
TL;DR: Two practical methods for the measurement of signal-to-noise ratio (SNR) performance in parallel imaging are described, and the g-factor shows qualitative agreement with theoretical predictions from the literature.
Abstract: In this work, two practical methods for the measurement of signal-to-noise ratio (SNR) performance in parallel imaging are described. Phantom and human studies were performed with a 32-channel cardiac coil in the context of ultrafast cardiac CINE imaging at 1.5 T using steady-state free precession (SSFP) and TSENSE. SNR and g-factor phantom measurements using a "multiple acquisition" method were compared to measurements from a "difference method". Excellent agreement was seen between the two methods, and the g-factor shows qualitative agreement with theoretical predictions from the literature. Examples of high temporal (42.6 ms) and spatial (2.1 × 2.1 × 8 mm³) resolution cardiac CINE SSFP images acquired from human volunteers using TSENSE are shown for acceleration factors up to 7. Image quality agrees qualitatively with phantom SNR measurements, suggesting an optimum acceleration of 4. With this acceleration, a cardiac function study consisting of 6 image planes (3 short-axis views, 3 long-axis views) was obtained in an 18-heartbeat breath-hold.
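
The "difference method" compared above is easy to state in code. Here is a hedged sketch of that measurement and the resulting g-factor estimate; it assumes two identical, registered acquisitions and a uniform-signal ROI, and ignores magnitude-bias corrections.

```python
import numpy as np

def snr_difference_method(img1, img2, roi):
    """SNR from two repeated acquisitions (the 'difference method').

    img1, img2: repeated magnitude images; roi: boolean mask over a
    uniform signal region. Signal comes from the average image, noise
    from the difference image (std scaled by sqrt(2)).
    """
    signal = 0.5 * (img1 + img2)[roi].mean()
    noise = (img1 - img2)[roi].std() / np.sqrt(2.0)
    return signal / noise

def g_factor(snr_full, snr_accel, R):
    # Geometry factor: SNR loss beyond the sqrt(R) sampling penalty
    # for acceleration factor R.
    return snr_full / (snr_accel * np.sqrt(R))
```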

Book ChapterDOI
TL;DR: Two new quality indices for fingerprint images are developed, and by applying a quality-based weighting scheme in the matching algorithm the overall matching performance can be improved; a decrease of 1.94% in EER is observed on the FVC2002 DB3 database.
Abstract: The performance of an automatic fingerprint authentication system relies heavily on the quality of the captured fingerprint images. In this paper, two new quality indices for fingerprint images are developed. The first index measures the energy concentration in the frequency domain as a global feature. The second index measures the spatial coherence in local regions. We present a novel framework for evaluating and comparing quality indices in terms of their capability of predicting the system performance at three different stages, namely, image enhancement, feature extraction and matching. Experimental results on the IBM-HURSLEY and FVC2002 DB3 databases demonstrate that the global index is better than the local index in the enhancement stage (correlation of 0.70 vs. 0.50) and comparable in the feature extraction stage (correlation of 0.70 vs. 0.71). Both quality indices are effective in predicting the matching performance, and by applying a quality-based weighting scheme in the matching algorithm, the overall matching performance can be improved; a decrease of 1.94% in EER is observed on the FVC2002 DB3 database.
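
Both kinds of index can be sketched briefly. The snippet below computes a global spectral-energy measure and a local structure-tensor coherence score; the ring radii, block size, and the use of plain gradient coherence are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def global_quality(img, r_lo=0.06, r_hi=0.4):
    """Global index sketch: fraction of spectral energy in a ring band
    where ridge frequencies typically live (radii are assumed fractions
    of the sampling rate, not the paper's exact bands)."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    r = np.hypot((y - h / 2) / h, (x - w / 2) / w)
    ring = (r > r_lo) & (r < r_hi)
    return F[ring].sum() / max(F.sum(), 1e-12)

def local_coherence(img, block=16):
    """Local index sketch: mean structure-tensor coherence (l1-l2)/(l1+l2)
    over blocks; near 1 for clear ridge orientation, near 0 for noise."""
    gy, gx = np.gradient(img.astype(np.float64))
    scores = []
    h, w = img.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            sx = gx[i:i + block, j:j + block]
            sy = gy[i:i + block, j:j + block]
            j11, j22, j12 = (sx * sx).sum(), (sy * sy).sum(), (sx * sy).sum()
            tr = j11 + j22
            if tr > 1e-12:
                disc = np.sqrt(max(tr * tr - 4 * (j11 * j22 - j12 * j12), 0.0))
                scores.append(disc / tr)  # = (l1 - l2) / (l1 + l2)
    return float(np.mean(scores)) if scores else 0.0
```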

Journal ArticleDOI
TL;DR: It was demonstrated that high image quality in CT reconstructions is possible even in systems with large geometric nonidealities, and a general analytic algorithm and corresponding calibration phantom for estimating these geometric parameters in cone-beam computed tomography (CT) systems were developed.
Abstract: Cone-beam computed tomography systems have been developed to provide in situ imaging for the purpose of guiding radiation therapy. Clinical systems have been constructed using this approach on a clinical linear accelerator (Elekta Synergy RP) and an isocentric C-arm. Geometric calibration involves the estimation of a set of parameters that describes the geometry of such systems and is essential for accurate image reconstruction. We have developed a general analytic algorithm and corresponding calibration phantom for estimating these geometric parameters in cone-beam computed tomography (CT) systems. The performance of the calibration algorithm is evaluated and its application is discussed. The algorithm makes use of a calibration phantom to estimate the geometric parameters of the system. The phantom consists of 24 steel ball bearings (BBs) in a known geometry. Twelve BBs are spaced evenly at 30 deg in two plane-parallel circles separated by a given distance along the tube axis. The detector (e.g., a flat-panel detector) is assumed to have no spatial distortion. The method estimates geometric parameters including the position of the x-ray source, the position and rotation of the detector, and the gantry angle, and can describe complex source-detector trajectories. The accuracy and sensitivity of the calibration algorithm were analyzed. The calibration algorithm estimates geometric parameters with a high level of accuracy, such that the quality of the CT reconstruction is not degraded by estimation error. Sensitivity analysis shows uncertainty of 0.01 degrees (around the beam direction) to 0.3 degrees (normal to the beam direction) in rotation, and 0.2 mm (orthogonal to the beam direction) to 4.9 mm (beam direction) in position for the medical linear accelerator geometry. Experimental measurements using a laboratory-bench cone-beam CT system of known geometry demonstrate the sensitivity of the method in detecting small changes in the imaging geometry, with an uncertainty of 0.1 mm in the transverse and vertical directions (perpendicular to the beam direction) and 1.0 mm in the longitudinal (beam axis) direction. The calibration algorithm was compared to a previously reported method, which uses one ball bearing at the isocenter of the system, to investigate the impact of more precise calibration on the image quality of cone-beam CT reconstruction. A thin steel wire located inside the calibration phantom was imaged on the cone-beam CT lab bench with and without perturbations in source and detector position during the scan. The described calibration method improved the quality of the image and the geometric accuracy of the reconstructed object, improving the full width at half maximum of the wire by 27.5% and increasing the contrast of the wire by 52.8%. The proposed method is not limited to the geometric calibration of cone-beam CT systems but can be used for many other systems that consist of one or more point sources and area detectors, such as calibration of a megavoltage (MV) treatment system (focal spot movement during beam delivery, MV source trajectory versus gantry angle, the axis of collimator rotation, and couch motion), cross calibration between kilovoltage imaging and the MV treatment system, and cross calibration between multiple imaging systems. Using the complete information of the system geometry, it was demonstrated that high image quality in CT reconstructions is possible even in systems with large geometric nonidealities.

Book
01 May 2005
Abstract: Preface.
Part I: Basic Imaging Principles. Overview.
Chapter 1: Introduction. History of Medical Imaging. Physical Signals. Imaging Modalities. Projection Radiography. Computed Tomography. Nuclear Medicine. Ultrasound Imaging. Magnetic Resonance Imaging. Summary and Key Concepts.
Chapter 2: Signals and Systems. Introduction. Signals. Point Impulse. Line Impulse. Comb and Sampling Functions. Rect and Sinc Functions. Exponential and Sinusoidal Signals. Separable Signals. Periodic Signals. Systems. Linear Systems. Impulse Response. Shift Invariance. Connections of LSI Systems. Separable Systems. Stable Systems. The Fourier Transform. Properties of the Fourier Transform. Linearity. Translation. Conjugation and Conjugate Symmetry. Scaling. Rotation. Convolution. Product. Separable Product. Parseval's Theorem. Separability. Transfer Function. Circular Symmetry and the Hankel Transform. Sampling. Sampling Signal Model. Nyquist Sampling Theorem. Anti-aliasing Filters. Summary and Key Concepts.
Chapter 3: Image Quality. Introduction. Contrast. Modulation. Modulation Transfer Function. Local Contrast. Resolution. Line Spread Function. Full Width at Half Maximum. Resolution and Modulation Transfer Function. Subsystem Cascade. Resolution Tool. Temporal and Spectral Resolution. Noise. Random Variables. Continuous Random Variables. Discrete Random Variables. Independent Random Variables. Signal-to-Noise Ratio. Amplitude SNR. Power SNR. Differential SNR. Nonrandom Effects. Artifacts. Distortion. Accuracy. Quantitative Accuracy. Diagnostic Accuracy. Summary and Key Concepts.
Part II: Radiographic Imaging. Overview.
Chapter 4: Physics of Radiography. Introduction. Ionization. Atomic Structure. Electron Binding Energy. Ionization and Excitation. Forms of Ionizing Radiation. Particulate Radiation. Electromagnetic Radiation. Nature and Properties of Ionizing Radiation. Primary Energetic Electron Interactions. Primary Electromagnetic Radiation Interactions. Attenuation of Electromagnetic Radiation. Measures of X-ray Beam Strength. Narrow Beam, Monoenergetic Photons. Narrow Beam, Polyenergetic Photons. Broad Beam Case. Radiation Dosimetry. Exposure. Dose and Kerma. Linear Energy Transfer. The f-factor. Dose Equivalent. Effective Dose. Summary and Key Concepts.
Chapter 5: Projection Radiography. Introduction. Instrumentation. X-ray Tubes. Filtration and Restriction. Compensation Filters and Contrast Agents. Grids, Airgaps, and Scanning Slits. Film-Screen Detectors. X-ray Image Intensifiers. Image Formation. Basic Imaging Equation. Geometric Effects. Blurring Effects. Film Characteristics. Noise and Scattering. Signal-to-Noise Ratio. Quantum Efficiency and Detective Quantum Efficiency. Compton Scattering. Summary and Key Concepts.
Chapter 6: Computed Tomography. Introduction. CT Instrumentation. CT Generations. X-ray Source and Collimation. CT Detectors. Gantry, Slip Ring, and Patient Table. Image Formation. Line Integrals. CT Numbers. Parallel-Ray Reconstruction. Fan-Beam Reconstruction. Helical CT Reconstruction. Cone Beam CT. Image Quality in CT. Resolution. Noise. Artifacts. Summary and Key Concepts.
Part III: Nuclear Medicine Imaging. Overview.
Chapter 7: The Physics of Nuclear Medicine. Introduction. Nomenclature. Radioactive Decay. Mass Defect and Binding Energy. Line of Stability. Radioactivity. Radioactive Decay Law. Modes of Decay. Positron Decay and Electron Capture. Isomeric Transition. Statistics of Decay. Radiotracers. Summary and Key Concepts.
Chapter 8: Planar Scintigraphy. Introduction. Instrumentation. Collimators. Scintillation Crystal. Photomultiplier Tubes. Positioning Logic. Pulse Height Analyzer. Gating Circuit. Image Capture. Image Formation. Event Position Estimation. Acquisition Modes. Anger Camera Imaging Equation. Image Quality. Resolution. Sensitivity. Uniformity. Energy Resolution. Noise. Factors Affecting Count Rate. Summary and Key Concepts.
Chapter 9: Emission Computed Tomography. Instrumentation. SPECT Instrumentation. PET Instrumentation. Image Formation. SPECT Image Formation. PET Image Formation. Iterative Reconstruction. Image Quality in SPECT and PET. Spatial Resolution. Attenuation and Scatter. Random Coincidences. Contrast. Noise and Signal-to-Noise. Summary and Key Concepts.
Part IV: Ultrasound Imaging. Overview.
Chapter 10: The Physics of Ultrasound. Introduction. The Wave Equation. Three-Dimensional Acoustic Waves. Plane Waves. Spherical Waves. Wave Propagation. Acoustic Energy and Intensity. Reflection and Refraction at Plane Interfaces. Transmission and Reflection Coefficients at Plane Interfaces. Attenuation. Scattering. Doppler Effect. Beam Pattern Formation and Focusing. Simple Field Pattern Model. Diffraction Formulation. Focusing. Summary and Key Concepts.
Chapter 11: Ultrasound Imaging Systems. Introduction. Instrumentation. Ultrasound Transducer. Ultrasound Probes. Pulse-Echo Imaging. The Pulse-Echo Equation. Transducer Motion. Ultrasound Imaging Modes. A-Mode Scan. M-Mode Scan. B-Mode Scan. Steering and Focusing. Transmit Steering and Focusing. Beamforming and Dynamic Focusing. Three-Dimensional Ultrasound Imaging. Summary and Key Concepts.
Part V: Magnetic Resonance Imaging. Overview.
Chapter 12: Physics of Magnetic Resonance. Introduction. Microscopic Magnetization. Macroscopic Magnetization. Precession and Larmor Frequency. Transverse and Longitudinal Magnetization. NMR Signals. Rotating Frame. RF Excitation. Relaxation. The Bloch Equations. Spin Echoes. Contrast Mechanisms. Summary and Key Concepts.
Chapter 13: Magnetic Resonance Imaging. Instrumentation. System Components. Magnet. Gradient Coils. Radio-Frequency Coils. Scanning Console and Computer. MRI Data Acquisition. Encoding Spatial Position. Slice Selection. Frequency Encoding. Polar Scanning. Gradient Echoes. Phase Encoding. Spin Echoes. Pulse Repetition Interval. Realistic Pulse Sequences. Image Reconstruction. Rectilinear Data. Polar Data. Imaging Equations. Image Quality. Sampling. Resolution. Noise. Signal-to-Noise Ratio. Artifacts. Summary and Key Concepts.
Index.

Journal ArticleDOI
TL;DR: Spiral windmill-type artifacts are effectively suppressed with the z-flying focal spot technique, which has the potential to maintain a low artifact level up to pitch 1.5, in this way increasing the maximum volume coverage speed that can be clinically used.
Abstract: We present a theoretical overview and a performance evaluation of a novel z-sampling technique for multidetector row CT (MDCT), relying on a periodic motion of the focal spot in the longitudinal direction (z-flying focal spot) to double the number of simultaneously acquired slices. The z-flying focal spot technique has been implemented in a recently introduced MDCT scanner. Using 32 x 0.6 mm collimation, this scanner acquires 64 overlapping 0.6 mm slices per rotation in its spiral (helical) mode of operation, with the goal of improved longitudinal resolution and reduction of spiral artifacts. The longitudinal sampling distance at isocenter is 0.3 mm. We discuss in detail the impact of the z-flying focal spot technique on image reconstruction. We present measurements of spiral slice sensitivity profiles (SSPs) and of longitudinal resolution, both in the isocenter and off-center. We evaluate the pitch dependence of the image noise measured in a centered 20 cm water phantom. To investigate spiral image quality we present images of an anthropomorphic thorax phantom and patient scans. The full width at half maximum (FWHM) of the spiral SSPs shows only minor variations as a function of the pitch, measured values differ by less than 0.15 mm from the nominal values 0.6, 0.75, 1, 1.5, and 2 mm. The measured FWHM of the smallest slice ranges between 0.66 and 0.68 mm at isocenter, except for pitch 0.55 (0.72 mm). In a centered z-resolution phantom, bar patterns up to 15 lp/cm can be visualized independent of the pitch, corresponding to 0.33 mm longitudinal resolution. 100 mm off-center, bar patterns up to 14 lp/cm are visible, corresponding to an object size of 0.36 mm that can be resolved in the z direction. Image noise for constant effective mAs is almost independent of the pitch. Measured values show a variation of less than 7% as a function of the pitch, which demonstrates correct utilization of the applied radiation dose at any pitch. The product of image noise and square root of the slice width (FWHM of the respective SSP) is the same constant for all slices except 0.6 mm. For the thinnest slice, relative image noise is increased by 17%. Spiral windmill-type artifacts are effectively suppressed with the z-flying focal spot technique, which has the potential to maintain a low artifact level up to pitch 1.5, in this way increasing the maximum volume coverage speed that can be clinically used.

Proceedings ArticleDOI
08 Dec 2005
TL;DR: The approach lies in constructing a topologically constrained epitome of an image based on a visual attention model that is both comprehensible and size varying, making the method suitable for display-critical applications.
Abstract: We present a non-photorealistic algorithm for retargeting large images to small-size displays, particularly on mobile devices. This method adapts large images so that important objects in the image are still recognizable when displayed at a lower target resolution. Existing image manipulation techniques such as cropping work well for images containing a single important object, and down-sampling works well for images containing low-frequency information. However, when these techniques are automatically applied to images with multiple objects, the image quality degrades and important information may be lost. Our algorithm addresses the case of multiple important objects in an image. The retargeting algorithm segments an image into regions, identifies important regions, removes them, fills the resulting gaps, resizes the remaining image, and re-inserts the important regions, as sketched below. Our approach lies in constructing a topologically constrained epitome of an image based on a visual attention model that is both comprehensible and size varying, making the method suitable for display-critical applications.
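
The five-stage pipeline reads naturally as a function skeleton. In the sketch below every stage is passed in as a callable, because the paper's actual segmentation, attention model, and hole-filling are not reproduced here; the stand-in names are hypothetical.

```python
def retarget(image, target_size, segment, rank, remove_and_fill,
             resize, reinsert):
    """Skeleton of the retargeting pipeline (hedged sketch); each callable
    is a hypothetical stand-in for a stage described in the paper."""
    regions = segment(image)                        # 1. segment into regions
    important = rank(image, regions)                # 2. pick important regions
    background = remove_and_fill(image, important)  # 3. remove them, fill gaps
    small = resize(background, target_size)         # 4. shrink remaining image
    return reinsert(small, important, target_size)  # 5. paste objects back
```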

Journal ArticleDOI
TL;DR: In this paper, a regularized Gauss-Newton scheme has been implemented for nonlinear image reconstruction in ECT, where the forward problem has been solved at each iteration using the finite element method and the Jacobian matrix is recalculated using an efficient adjoint field method.
Abstract: Electrical capacitance tomography (ECT) attempts to image the permittivity distribution of an object by measuring the electrical capacitances between sets of electrodes placed around its periphery. Image reconstruction in ECT is a nonlinear and ill-posed inverse problem. Although reconstruction techniques based on a linear approximation are fast, they are not adequate for all cases. In this paper, we study the nonlinearity of the inverse permittivity problem of ECT. A regularized Gauss-Newton scheme has been implemented for nonlinear image reconstruction. The forward problem has been solved at each iteration using the finite element method and the Jacobian matrix is recalculated using an efficient adjoint field method. Regularization techniques are required to overcome the ill-posedness: smooth generalized Tikhonov regularization for the smoothly varying case, and total variation (TV) regularization when there is a sharp transition of the permittivity have been used. The reconstruction results for experimental ECT data demonstrate the advantage of TV regularization for jump changes, and show improvement of the image quality by using nonlinear reconstruction methods.
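
The reconstruction loop maps onto a short update rule. Below is a minimal regularized Gauss-Newton sketch with smooth (identity-prior) Tikhonov regularization; the paper's finite-element forward solver, adjoint-field Jacobian, and TV variant are abstracted behind the forward and jacobian callables, which are assumed inputs.

```python
import numpy as np

def gauss_newton_tikhonov(x0, forward, jacobian, y, lam=1e-3, iters=10):
    """Regularized Gauss-Newton for a nonlinear inverse problem (sketch).

    forward(x): predicted capacitances for permittivity parameters x.
    jacobian(x): m-by-n sensitivity matrix (assumed callable; the paper
    computes it efficiently with an adjoint field method).
    Minimizes ||y - forward(x)||^2 + lam * ||x||^2.
    """
    x = x0.astype(np.float64).copy()
    n = x.size
    for _ in range(iters):
        r = y - forward(x)                          # data residual
        J = jacobian(x)
        H = J.T @ J + lam * np.eye(n)               # regularized normal matrix
        x += np.linalg.solve(H, J.T @ r - lam * x)  # GN step with prior pull
    return x
```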

Journal ArticleDOI
TL;DR: With adjustment of irradiation parameters and an imaging surface dose of less than 0.05 Gy, high-quality XVI images can be obtained for a phantom simulating the body thickness, and it is demonstrated that the local tomography technique improves the image contrast and the CNR while reducing the skin dose by 40-50% compared to the wide-field technique.

Journal ArticleDOI
TL;DR: A self-navigated, free-breathing, whole-heart 3D coronary MRI technique that would overcome these shortcomings and improve the ease of use of coronary MRI is developed and implemented.
Abstract: Respiratory motion is a major source of artifacts in cardiac magnetic resonance imaging (MRI). Free-breathing techniques with pencil-beam navigators efficiently suppress respiratory motion and minimize the need for patient cooperation. However, the correlation between the measured navigator position and the actual position of the heart may be adversely affected by hysteretic effects, navigator position, and temporal delays between the navigators and the image acquisition. In addition, irregular breathing patterns during navigator-gated scanning may result in low scan efficiency and prolonged scan time. The purpose of this study was to develop and implement a self-navigated, free-breathing, whole-heart 3D coronary MRI technique that would overcome these shortcomings and improve the ease-of-use of coronary MRI. A signal synchronous with respiration was extracted directly from the echoes acquired for imaging, and the motion information was used for retrospective, rigid-body, through-plane motion correction. The images obtained from the self-navigated reconstruction were compared with the results from conventional, prospective, pencil-beam navigator tracking. Image quality was improved in phantom studies using self-navigation, while equivalent results were obtained with both techniques in preliminary in vivo studies.

Journal ArticleDOI
TL;DR: Three-dimensional imaging methods, based on parallax as their depth cue, can be classified into the stereoscopic, providing binocular parallax only, and the multiview, providing both binocular and motion parallax.
Abstract: Three-dimensional imaging methods, based on parallax as their depth cue, can be classified into the stereoscopic, providing binocular parallax only, and the multiview, providing both binocular and motion parallax. In these methods, parallax is provided by creating a viewing zone with either special optical eyeglasses or a special optical plate as the viewing zone-forming optics. For stereoscopic image generation, either the eyeglasses or the optical plate can be employed, but the multiview requires the optical plate, or the eyeglasses combined with a tracking device. The stereoscopic image pair and the multiview images are presented either simultaneously or as a time sequence using projectors or display panels. Multiview images can also be presented two at a time according to the viewer's movements. The presence of the viewing zone-forming optics often causes undesirable problems, such as the appearance of moiré fringes, image quality deterioration, depth reversal, limited viewing regions, low image brightness, image blurring, and the inconvenience of wearing eyeglasses.

Journal ArticleDOI
TL;DR: In this article, a multi-band wavelet-based image fusion method is presented as a further development of the two-band wavelet transformation, and a set of quality measures is classified and analyzed.

Journal ArticleDOI
TL;DR: Parallel imaging is a recently developed family of techniques that take advantage of the spatial information inherent in phased-array radiofrequency coils to significantly shorten acquisition times in magnetic resonance imaging.
Abstract: Parallel imaging is a recently developed family of techniques that take advantage of the spatial information inherent in phased-array radiofrequency coils to reduce acquisition times in magnetic resonance imaging. In parallel imaging, the number of sampled k-space lines is reduced, often by a factor of two or greater, thereby significantly shortening the acquisition time. Parallel imaging techniques have only recently become commercially available, and the wide range of clinical applications is just beginning to be explored. The potential clinical applications primarily involve reduction in acquisition time, improved spatial resolution, or a combination of the two. Improvements in image quality can be achieved by reducing the echo train lengths of fast spin-echo and single-shot fast spin-echo sequences. Parallel imaging is particularly attractive for cardiac and vascular applications and will likely prove valuable as 3-T body and cardiovascular imaging becomes part of standard clinical practice. Limitations of parallel imaging include reduced signal-to-noise ratio and reconstruction artifacts. It is important to consider these limitations when deciding when to use these techniques. © RSNA, 2005

Patent
28 Dec 2005
TL;DR: In this article, a probabilistic, non-Gaussian, robust image reconstruction method was proposed to enhance the image quality of images or video captured using a low-resolution image capture device.
Abstract: A result higher resolution (HR) image of a scene given multiple, observed lower resolution (LR) images of the scene is computed using a Bayesian estimation image reconstruction methodology. The methodology yields the result HR image based on a Likelihood probability function that implements a model for the formation of LR images in the presence of noise. This noise is modeled by a probabilistic, non-Gaussian, robust function. The image reconstruction methodology may be used to enhance the image quality of images or video captured using a low resolution image capture device. Other embodiments are also described and claimed.
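
The estimation step can be illustrated with one iteration of a MAP-style update. The sketch below uses a Lorentzian influence function as a stand-in for the claimed probabilistic, non-Gaussian, robust noise model; the downsample/upsample operators, step size, and sigma are all assumed, simplified inputs rather than the patented method.

```python
import numpy as np

def robust_sr_step(hr, lr_images, downsample, upsample, step=0.1, sigma=1.0):
    """One gradient step of MAP super-resolution with a robust data term.

    downsample: HR -> LR forward model (warp + blur + decimate);
    upsample: its adjoint. Both are assumed, simplified operators.
    The weight w is proportional to a Lorentzian influence function,
    which down-weights outlier residuals instead of squaring them.
    """
    grad = np.zeros_like(hr, dtype=np.float64)
    for lr in lr_images:
        r = downsample(hr) - lr           # residual for one observation
        w = r / (1.0 + (r / sigma) ** 2)  # robust, outlier-resistant weight
        grad += upsample(w)
    return hr - step * grad               # move the HR estimate downhill
```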

Journal ArticleDOI
TL;DR: The experimental results show that the proposed algorithm yields a watermark that is invisible to human eyes and robust to various image manipulations.
Abstract: In this paper, the authors propose a spread-spectrum image watermarking algorithm using the discrete multiwavelet transform. Performance improvement with respect to existing algorithms is obtained by genetic algorithm optimization. In the proposed optimization process, the authors search for parameters that consist of threshold values and the embedding strength to improve the visual quality of watermarked images and the robustness of the watermark. These parameters are varied to find the most suitable values for images with different characteristics. The experimental results show that the proposed algorithm yields a watermark that is invisible to human eyes and robust to various image manipulations. The authors also compare their experimental results with the results of previous work using various test images.
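
The parameters the genetic algorithm tunes are easy to see in a bare-bones embedder. This sketch spreads watermark bits over a pseudo-noise carrier added to perceptually significant coefficients of a single transform band; the threshold rule and multiplicative strength term are illustrative assumptions (the paper works in the multiwavelet domain), and the GA search itself is omitted.

```python
import numpy as np

def embed_spread_spectrum(coeffs, watermark_bits, strength, threshold, seed=0):
    """Spread-spectrum embedding sketch (single band, not multiwavelet).

    Adds a sign-modulated pseudo-noise sequence to transform coefficients
    whose magnitude exceeds `threshold`. `strength` and `threshold` are
    exactly the parameters the paper optimizes with a genetic algorithm.
    """
    rng = np.random.default_rng(seed)
    c = coeffs.astype(np.float64).copy()
    sig = np.abs(c) > threshold               # perceptually significant coeffs
    pn = rng.standard_normal(sig.sum())       # pseudo-noise carrier
    bits = np.resize(2 * np.asarray(watermark_bits) - 1, sig.sum())  # +/-1
    c[sig] += strength * bits * pn * np.abs(c[sig])  # magnitude-scaled mark
    return c
```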

Journal ArticleDOI
Martin Spahn
TL;DR: In angiography and fluoroscopy the transition from image intensifiers to flat detectors is facilitated by the ample advantages they offer, such as distortion-free images, excellent coarse contrast, large dynamic range and high X-ray sensitivity.
Abstract: Diagnostic and interventional flat detector X-ray systems are penetrating the market in all application segments. First introduced in radiography and mammography, they have conquered cardiac and general angiography and are getting increasing attention in fluoroscopy. Two flat detector technologies prevail. The dominating method is based on an indirect X-ray conversion process, using cesium iodide scintillators. It offers considerable advantages in radiography, angiography and fluoroscopy. The other method employs a direct converter such as selenium, which is particularly suitable for mammography. Both flat detector technologies are based on amorphous silicon active pixel matrices. Flat detectors facilitate the clinical workflow in radiographic rooms, foster improved image quality and provide the potential to reduce dose. This added value is based on their large dynamic range, their high sensitivity to X-rays and the instant availability of the image. Advanced image processing is instrumental in these improvements and expands the range of conventional diagnostic methods. In angiography and fluoroscopy the transition from image intensifiers to flat detectors is facilitated by the ample advantages they offer, such as distortion-free images, excellent coarse contrast, large dynamic range and high X-ray sensitivity. These characteristics and their compatibility with strong magnetic fields are the basis for improved diagnostic methods and innovative interventional applications.

Journal ArticleDOI
TL;DR: A review of image reconstruction methods can be found in this paper, where the most reliable reconstruction is the most conservative one, which seeks the simplest underlying image consistent with the input data.
Abstract: Digital image reconstruction is a robust means by which the underlying images hidden in blurry and noisy data can be revealed. The main challenge is sensitivity to measurement noise in the input data, which can be magnified strongly, resulting in large artifacts in the reconstructed image. The cure is to restrict the permitted images. This review summarizes image reconstruction methods in current use. Progressively more sophisticated image restrictions have been developed, including (a) filtering the input data, (b) regularization by global penalty functions, and (c) spatially adaptive methods that impose a variable degree of restriction across the image. The most reliable reconstruction is the most conservative one, which seeks the simplest underlying image consistent with the input data. Simplicity is context-dependent, but for most imaging applications, the simplest reconstructed image is the smoothest one. Imposing the maximum, spatially adaptive smoothing permitted by the data results in t...

Journal ArticleDOI
TL;DR: This paper investigates the possibility of increasing the frame rate in ultrasound imaging by using modulated excitation signals, and shows that Hadamard spatial encoding in transmit with FM emission signals can be used to increase the frame rates by 12 to 25 times with either a slight or no reduction in signal-to-noise ratio and image quality.
Abstract: For pt.II, see ibid., vol.52, no.2, p.192-207 (2005). This paper, the last from a series of three papers on the application of coded excitation signals in medical ultrasound, investigates the possibility of increasing the frame rate in ultrasound imaging by using modulated excitation signals. Linear array-coded imaging and sparse synthetic transmit aperture imaging are considered, and the trade-offs between frame rate, image quality, and SNR are discussed. It is shown that FM codes can be used to increase the frame rate by a factor of two without a degradation in image quality and by a factor of 5, if a slight decrease in image quality can be accepted. The use of synthetic transmit aperture imaging is also considered, and it is here shown that Hadamard spatial encoding in transmit with FM emission signals can be used to increase the frame rate by 12 to 25 times with either a slight or no reduction in signal-to-noise ratio and image quality. By using these techniques, a complete ultrasound-phased array image can be created using only two emissions.
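
The Hadamard spatial encoding mentioned above has a compact linear-algebra core. The sketch below shows the decode step for synthetic transmit aperture data, assuming linear propagation and a power-of-two element count; real systems decode per receive channel and per depth sample, which is collapsed here into one 2D array.

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_decode(received, H=None):
    """Decode Hadamard-encoded transmissions (hedged sketch).

    received: array (n_emissions, n_samples), where emission k drove all
    n elements simultaneously with polarities H[k, :]. Stacking the
    single-element responses S gives received = H @ S, and since
    H @ H.T = n * I, the decode is S = H.T @ received / n. Only n
    simultaneous emissions recover all n single-element datasets.
    """
    n = received.shape[0]
    if H is None:
        H = hadamard(n)  # requires n to be a power of two
    return (H.T @ received) / n
```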

Proceedings Article
01 Jan 2005
TL;DR: Generalized 2D Principal Component Analysis (G2DPCA) is proposed to solve the curse of dimensionality dilemma and small sample size problem in image representation, recognition and retrieval.
Abstract: In the tasks of image representation, recognition and retrieval, a 2D image is usually transformed into a 1D long vector and modelled as a point in a high-dimensional vector space. This vector-space model brings much convenience and many advantages. However, it also leads to problems such as the Curse of Dimensionality dilemma and the Small Sample Size problem, and thus poses a series of challenges: for example, how to deal with numerical instability in image recognition, how to improve accuracy while lowering computational complexity and storage requirements in image retrieval, and how to enhance image quality while reducing transmission time in image transmission. In this paper, these problems are solved, to some extent, by the proposed Generalized 2D Principal Component Analysis (G2DPCA). G2DPCA overcomes the limitations of the recently proposed 2DPCA (Yang et al., 2004) in the following respects: (1) the essence of 2DPCA is clarified, and a theoretical proof of why 2DPCA is better than Principal Component Analysis (PCA) is given; (2) 2DPCA often needs many more coefficients than PCA to represent an image, and a Bilateral-projection-based 2DPCA (B2DPCA) is proposed in this work to remedy this drawback; (3) a Kernel-based 2DPCA (K2DPCA) scheme is developed and the relationship between K2DPCA and KPCA (Scholkopf et al., 1998) is explored. Experimental results in face image representation and recognition show the excellent performance of G2DPCA.
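
The bilateral projection idea can be sketched in a few lines of linear algebra. Below, the left and right projection matrices are taken as the top eigenvectors of the row- and column-covariance matrices; this is a plausible reading of B2DPCA for illustration (the paper's exact formulation may differ, e.g., an iterative solution), and p, q are user-chosen feature dimensions.

```python
import numpy as np

def b2dpca_fit(images, p, q):
    """Bilateral 2DPCA sketch: left (row) and right (column) projections.

    images: array (N, h, w). Returns the mean image plus L (h x p) and
    R (w x q) so each image is represented by the small p x q matrix
    L.T @ (A - mean) @ R instead of a long 1D vector.
    """
    X = images.astype(np.float64)
    mean = X.mean(axis=0)
    X = X - mean
    Gcol = np.einsum('nij,nik->jk', X, X)  # sum_n X_n^T X_n  (w x w)
    Grow = np.einsum('nji,nki->jk', X, X)  # sum_n X_n X_n^T  (h x h)
    _, Vc = np.linalg.eigh(Gcol)           # eigenvectors, ascending order
    _, Vr = np.linalg.eigh(Grow)
    R = Vc[:, ::-1][:, :q]                 # top-q right projection vectors
    L = Vr[:, ::-1][:, :p]                 # top-p left projection vectors
    return mean, L, R

def b2dpca_encode(img, mean, L, R):
    # p x q feature matrix: far fewer coefficients than unilateral 2DPCA.
    return L.T @ (img - mean) @ R
```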

Journal Article
TL;DR: Using parameters from a time-resolved parallel 3D MRA sequence without view sharing, giving a 33% increase in spatial resolution, simulations have revealed that k-space segmentation into 3 regions keeps artifacts at an acceptable level.
Abstract: RATIONALE AND OBJECTIVES: In view sharing, some parts of k-space are updated more often than others, leading to an effective shortening of the total acquisition time. Undersampling of high-frequency k-space data, however, can result in artifacts at the edges of blood vessels, especially during rapid signal intensity changes. The objective of this study was to evaluate a new time-resolved echo-shared angiographic technique (TREAT) combining parallel imaging with view sharing. First, the presence of artifacts arising from different temporal interpolation schemes was evaluated in simulations of the point spread function. Second, the image quality and presence of artifacts of time-resolved parallel three-dimensional magnetic resonance angiography (3D MRA) of the chest, acquired with and without view sharing, were assessed in a clinical study of patients with cardiovascular or pulmonary disease. MATERIALS AND METHODS: Using parameters from a time-resolved parallel 3D MRA sequence without view sharing (parallel MRA), giving a 33% increase in spatial resolution, our simulations revealed that k-space segmentation into 3 regions keeps artifacts at an acceptable level. Thirty-six consecutive patients (mean age, 50 +/- 16 years; 15 females, 22 males) were examined in a clinical study with TREAT. The image data were compared with those of a group of 31 consecutive patients (mean age, 46 +/- 19 years; 12 females, 19 males) examined with a conventional time-resolved parallel MRA sequence without view sharing (parallel MRA). The image quality and presence of artifacts were assessed in a blind comparison by 2 radiologists in consensus using MPR and MIP reconstructions. Furthermore, the peak SNR of the pulmonary artery and aorta was compared between both MRA sequences. RESULTS: The image quality of TREAT was rated significantly higher than that of the parallel MRA sequence without view sharing: depending on the orientation of MPR and MIP reconstructions, excellent image quality was found in 69-89% of cases with TREAT and in 45-71% with the parallel MRA protocol without view sharing. The presence of artifacts was comparable between the two sequences. CONCLUSION: View sharing can be successfully combined with other acceleration techniques, such as parallel imaging. TREAT allows the assessment of the thoracic vasculature with high temporal and spatial resolution.