scispace - formally typeset

Showing papers on "Image quality published in 1996"


Journal ArticleDOI
TL;DR: An MR angiographic technique, referred to as 3D TRICKS (3D time‐resolved imaging of contrast kinetics) has been developed, which combines and extends to 3D imaging several previously published elements, allowing reconstruction of a series of 3D image sets having an effective temporal frame rate of one volume every 2‐6 s.
Abstract: An MR angiographic technique, referred to as 3D TRICKS (3D time-resolved imaging of contrast kinetics) has been developed. This technique combines and extends to 3D imaging several previously published elements. These elements include an increased sampling rate for lower spatial frequencies, temporal interpolation of k-space views, and zero-filling in the slice-encoding dimension. When appropriately combined, these elements permit reconstruction of a series of 3D image sets having an effective temporal frame rate of one volume every 2-6 s. Acquiring a temporal series of images offers advantages over the current contrast-enhanced 3D MRA techniques in that it (i) increases the likelihood that an arterial-only 3D image set will be obtained, (ii) permits the passage of the contrast agent to be observed, and (iii) allows temporal-processing techniques to be applied to yield additional information or improve image quality.
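The temporal-interpolation element can be illustrated with a short sketch (function and variable names here are illustrative, not from the paper): k-space views acquired sparsely in time are linearly interpolated onto a common frame grid before each 3D reconstruction, which is what allows a dense series of image sets from undersampled acquisitions.

```python
import numpy as np

def interpolate_views(samples, times, frame_times):
    """Linearly interpolate sparsely sampled k-space views onto a common
    temporal grid, as in view sharing: `samples` is (n_acq, n_views)
    complex data acquired at `times`; returns (n_frames, n_views).
    Real and imaginary parts are interpolated separately."""
    out = np.empty((len(frame_times), samples.shape[1]), dtype=complex)
    for v in range(samples.shape[1]):
        out[:, v] = (np.interp(frame_times, times, samples[:, v].real)
                     + 1j * np.interp(frame_times, times, samples[:, v].imag))
    return out
```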

908 citations


Proceedings ArticleDOI
01 Aug 1996
TL;DR: An algorithm for real-time level of detail reduction and display of high-complexity polygonal surface data that has been implemented for approximating and rendering digital terrain models and other height fields, and consistently performs at interactive frame rates with high image quality.
Abstract: We present an algorithm for real-time level of detail reduction and display of high-complexity polygonal surface data. The algorithm uses a compact and efficient regular grid representation, and employs a variable screen-space threshold to bound the maximum error of the projected image. A coarse level of simplification is performed to select discrete levels of detail for blocks of the surface mesh, followed by further simplification through repolygonalization in which individual mesh vertices are considered for removal. These steps compute and generate the appropriate level of detail dynamically in real-time, minimizing the number of rendered polygons and allowing for smooth changes in resolution across areas of the surface. The algorithm has been implemented for approximating and rendering digital terrain models and other height fields, and consistently performs at interactive frame rates with high image quality.
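The variable screen-space error threshold can be sketched as follows, assuming a standard perspective projection (the names and the projection model are illustrative, not taken from the paper): a block's simplified level of detail is accepted only if its maximum world-space geometric error, projected to the screen, stays below a pixel threshold.

```python
import math

def block_is_acceptable(geometric_error, distance, fov_y, screen_height, tau_pixels):
    """Accept a simplified terrain block only if its maximum world-space
    geometric error, projected into screen space under a perspective
    projection, is at most `tau_pixels` (the screen-space threshold)."""
    # Pixels subtended by a unit world-space length at the given distance.
    scale = screen_height / (2.0 * math.tan(fov_y / 2.0))
    screen_error = scale * geometric_error / distance
    return screen_error <= tau_pixels
```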

714 citations


Book
01 Mar 1996
TL;DR: Radiometry and photometry, solid-state arrays, array performance, camera performance, CRT-based displays, sampling theory, linear system theory, system MTF, image quality, and minimum resolvable contrast, as discussed by the authors.
Abstract: Radiometry and photometry, solid-state arrays, array performance, camera performance, CRT-based displays, sampling theory, linear system theory, system MTF, image quality, minimum resolvable contrast.

516 citations


Journal ArticleDOI
Wei Ding1, Bede Liu1
TL;DR: A feedback re-encoding method with a rate-quantization model, which can be adapted to changes in picture activities, is developed and used for quantization parameter selection at the frame and slice level.
Abstract: For MPEG video coding and recording applications, it is important to select the quantization parameters at slice and macroblock levels to produce consistent quality image for a given bit budget. A well-designed rate control strategy can improve the overall image quality for video transmission over a constant-bit-rate channel and fulfil the editing requirement of video recording, where a certain number of new pictures are encoded to replace consecutive frames on the storage media using, at most, the same number of bits. We developed a feedback re-encoding method with a rate-quantization model, which can be adapted to changes in picture activities. The model is used for quantization parameter selection at the frame and slice level. The extra computations needed are modest. Experiments show the accuracy of the model and the effectiveness of the proposed rate control method. A new bit allocation algorithm is then proposed for MPEG video coding.
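A minimal sketch of the feedback idea, assuming a simple hyperbolic rate model R(Q) = X/Q as a stand-in for the paper's rate-quantization model (names and the model form are illustrative): the model picks a quantizer for a target bit budget, and the activity parameter X is re-estimated from the bits the re-encoding pass actually produced.

```python
def select_q(target_bits, x, q_min=1, q_max=31):
    """Choose a quantization parameter from the hyperbolic model
    R(Q) = X / Q, i.e. Q = X / target rate, clamped to the legal
    MPEG quantizer range. `x` is the model's activity parameter."""
    q = round(x / max(target_bits, 1))
    return min(max(q, q_min), q_max)

def update_model(actual_bits, q):
    """Feedback step: after (re-)encoding with quantizer `q` produced
    `actual_bits`, re-estimate the model parameter X = R * Q."""
    return actual_bits * q
```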

377 citations


Proceedings ArticleDOI
02 Nov 1996
TL;DR: In this paper, the authors proposed a new method which provides better modeling of the SRF for Tl-201 SPECT, and should provide improved accuracy for non-uniform attenuators.
Abstract: Scatter compensation using iterative reconstruction results in improved image quality and quantitative accuracy compared to subtraction-based methods. However, this requires knowledge of the spatially varying, object-dependent scatter response function (SRF). We have previously developed a method, slab-derived scatter estimation (SDSE), for estimating the SRF. However, this method has reduced accuracy for nonuniform attenuators and Tl-201 imaging. In this paper we present a new method which provides better modeling of the SRF for Tl-201 SPECT, and should provide improved accuracy for nonuniform attenuators. The method requires 3 image-space convolutions and an attenuated projection for each viewing angle. Implementation in a projector-backprojector pair for use with an iterative reconstruction algorithm would require 2 image-space Fourier transforms and 6 image-space inverse Fourier transforms per iteration. We observed good agreement between SRFs and projection data estimated using this new model compared to those obtained using Monte Carlo simulations.

219 citations




Patent
10 Dec 1996
TL;DR: In this paper, a method and apparatus are presented for imaging and identifying concealed objects within an obscuring medium using radiation (optical, photo-acoustic, ionizing, and/or acoustic) optimized for imaging (e.g., temporal properties, spectral bandwidth, directionality, polarization, etc.).
Abstract: A method and apparatus are provided for imaging and identifying concealed objects within an obscuring medium using radiation (optical, photo-acoustic, ionizing, and/or acoustic) optimized for imaging (e.g. temporal properties, spectral bandwidth, directionality, polarization, etc.). Radiation propagates through, interacts with, exits the medium and the object, and is detected/imaged. Image quality can be improved if radiation is collimated and/or if transmission and/or backscattered measurements from a number of perspectives are used to improve image reconstruction. Coupling materials can be employed during image acquisition to enhance radiation coupling as well as to provide desirable absorption and scattering properties. Contrast materials and agents can also aid in the detection of concealed objects. Adaptive methods, e.g. using reference objects, including implementations based on the concept of guide stars, can improve the imaging process. The surface can be monitored and groomed to enhance the imaging process. Tomosynthesis techniques can be used to reconstruct images. Acousto-optic effects may be observed when both optical radiation and acoustic radiation are introduced into the medium. A laser vibrometry, speckle, or holographic interferometry imaging technique can be used to read out the acoustic waveform exiting the medium surface directly or after interacting with a deformable mirrored or reflective layer coupled to the medium. The medium may be prepared prior to imaging in order to reduce surface irregularities and roughness. Multilayer mirrors and capillary optics can be used to enhance imaging systems which use ionizing radiation. Resistance images can be obtained using probes to penetrate the medium.

191 citations


Journal ArticleDOI
TL;DR: The quantitative evaluation of imaging performance has revealed potential advantages in a two-tiered receiver antenna configuration whose measured field values are more sensitive to target region changes than the typical tomographic type of approach which uses reception sites around the full target region perimeter.
Abstract: A prototype microwave imaging system is evaluated for its ability to recover two-dimensional (2-D) electrical property distributions under transverse magnetic (TM) illumination using multitarget tissue-equivalent phantoms. Experiments conducted in a surrounding lossy saline tank demonstrate that simultaneous recovery of both the real and imaginary components of the electrical property distribution is possible using absolute imaging procedures over a frequency range of 300-700 MHz. Further, image reconstructions of embedded tissue-equivalent targets are found to be quantitative not only with respect to geometrical factors such as object size and location but also electrical composition. Quantitative assessments based on full-width half-height criteria reveal that errors in diameter estimates of reconstructed targets are less than 10 mm in all cases, whereas positioning errors are less than 1 mm in single-object experiments but degrade to 4-10 mm when multiple targets are present. Recovery of actual electrical properties is found to be frequency dependent for the real and imaginary components, with background values typically within 10-20% of their correct values and embedded objects having similar accuracies as a percentage of the electrical contrast, although errors as high as 50% can occur. The quantitative evaluation of imaging performance has revealed potential advantages in a two-tiered receiver antenna configuration whose measured field values are more sensitive to target region changes than the typical tomographic approach, which uses reception sites around the full target region perimeter.
This measurement strategy has important implications for both the image reconstruction algorithm where there is a premium on minimizing problem size without sacrificing image quality and the hardware system design which seeks to economize on the amount of measured data required for quantitative image reconstruction while maximizing its sensitivity to target perturbations.

151 citations


Patent
14 Jun 1996
TL;DR: In this paper, a 3D image data set representing a volume of material such as human tissue is created using speckle decorrelation techniques to process successive 2D data slices from a moving, standard 1D or 1.5D ultrasound transducer.
Abstract: A 3D image data set representing a volume of material such as human tissue is created using speckle decorrelation techniques to process successive 2D data slices from a moving, standard 1D or 1.5D ultrasound transducer. This permits the use of standard ultrasound machinery, without the use of additional slice-position hardware, to create 3D images without having to modify the machinery or its operation. Similar techniques can be used for special data processing within the imaging system as well to expedite the image acquisition process. Optionally, the image quality of 2D images can be enhanced through the use of multiple 3D data sets derived using the method.

129 citations


Journal ArticleDOI
TL;DR: A total-variation-minimization-based iterative algorithm is described in this paper that enhances the quality of reconstructed images with frequency-domain data over that obtained previously with a regularized least-squares approach.
Abstract: Optical image reconstruction in heterogeneous turbid media is sensitive to noise, especially when the signal-to-noise ratio of a measurement system is low. A total-variation-minimization-based iterative algorithm is described in this paper that enhances the quality of reconstructed images with frequency-domain data over that obtained previously with a regularized least-squares approach. Simulation experiments in an 8.6-cm-diameter circular heterogeneous region with low- and high-contrast levels between the target and the background show that the quality of the reconstructed images can be improved considerably when total-variation minimization is included. These simulated results are further verified and confirmed by images reconstructed from experimental data by the use of the same geometry and optically tissue-equivalent phantoms. Measures of imaging performance, including the location, size, and shape of the reconstructed heterogeneity, along with absolute errors in the predicted optical-property values are used to quantify the enhancements afforded by this new approach to optical image reconstruction with diffuse light. The results show improvements of up to 5 mm in terms of geometric information and an order of magnitude or more decrease in the absolute errors in the reconstructed optical-property values for the test cases examined.
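A minimal sketch of total-variation minimization applied to a plain 2-D denoising model (illustrative only; the paper embeds the TV term in a frequency-domain optical reconstruction, not a denoising problem): minimize 0.5*||u - f||^2 + lam*TV(u) by gradient descent, where the TV gradient is the negative divergence of the normalized image gradient.

```python
import numpy as np

def tv_gradient(u, eps=1e-8):
    """Gradient of a smoothed total-variation functional for a 2-D array,
    using forward differences; boundary handling here is a rough sketch."""
    dx = np.diff(u, axis=1, append=u[:, -1:])
    dy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    # Negative divergence of the normalized gradient field.
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def tv_denoise(f, lam=0.1, step=0.1, n_iter=100):
    """Minimize 0.5*||u - f||^2 + lam*TV(u) by gradient descent."""
    u = f.copy()
    for _ in range(n_iter):
        u -= step * ((u - f) + lam * tv_gradient(u))
    return u
```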

Patent
26 Nov 1996
TL;DR: In this article, an image on a first copy sheet is scanned and compared to a reference image to calibrate the imaging machine, which is automatically initiated via control data stored in a memory.
Abstract: An imaging machine operating components include an input scanner for providing images on copy sheets and a copy sheet path connected to the input scanner. The imaging machine is calibrated by providing an image on a first copy sheet and automatically conveying the first copy sheet to the input scanner by way of the copy path. The image on the first copy sheet is scanned and provides the image on a second copy sheet. The image on the second copy sheet is sensed and compared to a reference image to calibrate the imaging machine. The calibration sequence is automatically initiated via control data stored in a memory.

Journal ArticleDOI
TL;DR: It is shown that the fact that the burst cycle period is in general not an integer multiple of the sampling grid distance does not complicate the algorithm, and an image example using X-SAR data for simulation of a burst system is presented.
Abstract: Processing ScanSAR or burst-mode SAR data by standard high precision algorithms (e.g., range/Doppler, wavenumber domain, or chirp scaling) is shown to be an interesting alternative to the normally used SPECAN (or deramp) algorithm. Long burst trains with zeroes inserted into the interburst intervals can be processed coherently. This kind of processing preserves the phase information of the data, an important aspect for ScanSAR interferometry. Due to the interference of the burst images, the impulse response shows a periodic modulation that can be eliminated by a subsequent low-pass filtering of the detected image. This strategy allows an easy and safe adaptation of existing SAR processors to ScanSAR data if throughput is not an issue. The images are automatically consistent with regular SAR mode images both with respect to geometry and radiometry. The amount and diversity of the software for a multimode SAR processor are reduced. The impulse response and transfer functions of a burst-mode end-to-end system are derived. Special attention is drawn to the achievable image quality, the radiometric accuracy, and the effective number of looks. The scalloping effect known from burst-mode systems can be controlled by the spectral weighting of the processor transfer function. It is shown that the fact that the burst cycle period is in general not an integer multiple of the sampling grid distance does not complicate the algorithm. An image example using X-SAR data for simulation of a burst system is presented.

Journal ArticleDOI
TL;DR: A detailed comparison between MLS and the other two conventional projection access orderings in ART: the random permutation scheme (RPS) and the sequential access scheme (SAS) shows that one-iteration MLS produces the best reconstruction in many situations.
Abstract: In a previous report we presented a novel ART technique with the projections arranged and accessed in a multilevel scheme (MLS) for efficient algebraic image reconstruction, but whether the scheme is still superior in real situations where the data are noisy is unknown. In this paper, we make a detailed comparison between MLS and the other two conventional projection access orderings in ART: the random permutation scheme (RPS) and the sequential access scheme (SAS). By simulating reconstructions of a human head using different sizes of detector, taking different numbers of projections, each measurement under a different number of photons, a full mapping of the reconstruction accuracy measured by correlation coefficient for the three schemes has been made. Test results demonstrate that one-iteration MLS produces the best reconstruction in many situations. It outperforms one-iteration RPS when the noise level is low. SAS in many cases can never attain the image quality of one-iteration MLS, even with many more iterations. A convergence test using different initial guesses also demonstrates that MLS has less initial dependence. In the Fourier domain, it also represents an efficient and fast implementation of the Fourier slice theorem.
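One common way to realize a multilevel-style access order for a power-of-two number of projections is the bit-reversal permutation, which visits views so that each new view is widely separated in angle from those used just before it, unlike sequential access. This is an illustrative stand-in, not the paper's exact MLS construction:

```python
def multilevel_order(n):
    """Bit-reversal access order for n projections (n a power of two):
    successive indices are far apart, so consecutive ART updates use
    nearly orthogonal projections instead of adjacent ones."""
    bits = n.bit_length() - 1
    return [int(format(i, f"0{bits}b")[::-1], 2) for i in range(n)]
```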

Proceedings ArticleDOI
16 Sep 1996
TL;DR: This work investigated the relationship between the perceived image fidelity and image quality of halftone textures by asking subjects to rank order a set of printed halftones on the basis of smoothness and reducing the contrast of each pattern until it was at threshold.
Abstract: Image fidelity (inferred by the ability to discriminate between two images) and image quality (inferred by the preference for one image over another) are often assumed to be directly related. We investigated the relationship between the perceived image fidelity and image quality of halftone textures. Subjects were asked to rank order a set of printed halftone swatches on the basis of smoothness. They were then asked to reduce the contrast of each pattern until it was at threshold, thus providing an estimate of the pattern's perceptual strength and its discriminability from a non-textured swatch. We found only a moderate correlation between image fidelity and image quality.

Proceedings ArticleDOI
TL;DR: A new, fast algorithm for synthetic aperture radar (SAR) image formation is introduced, based on a decomposition of the time domain backprojection technique, which results in a quadtree data structure that is readily parallelizable and requires only limited interprocessor communications.
Abstract: A new, fast algorithm for synthetic aperture radar (SAR) image formation is introduced. The algorithm is based on a decomposition of the time domain backprojection technique. It inherits the primary advantages of time domain backprojection: simple motion compensation, simple and spatially unconstrained propagation velocity compensation, and localized processing artifacts. The computational savings are achieved by using a divide-and-conquer strategy of decomposition, and exploiting spatial redundancy in the resulting sub-problems. The decomposition results in a quadtree data structure that is readily parallelizable and requires only limited interprocessor communications. For a SAR with N aperture points and an N by N image area, the algorithm is seen to achieve O(N2logN) complexity. The algorithm allows a direct trade between processing speed and focused image quality.
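The claimed O(N^2 log N) complexity follows from the divide-and-conquer recurrence: an N-point aperture over an N x N image splits into four N/2 x N/2 subimages, each backprojected from a length-N/2 subaperture, plus O(N^2) work per level to combine. A small sketch of the operation count (the constants are illustrative, not from the paper):

```python
def fbp_cost(n):
    """Operation count for factorized (quadtree) backprojection of an
    n-point aperture onto an n x n image: T(n) = 4 T(n/2) + n^2 with
    T(1) = 1, which solves to n^2 (log2 n + 1), i.e. O(n^2 log n).
    Direct time-domain backprojection would cost O(n^3)."""
    if n == 1:
        return 1
    return 4 * fbp_cost(n // 2) + n * n
```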

Journal ArticleDOI
TL;DR: Higher image quality was achieved over a much wider exposure range with the storage phosphor system than with either film or the CCD systems.
Abstract: OBJECTIVE To compare film for intra-oral radiography with two charge-coupled device (CCD) and one storage phosphor system for digital imaging in respect of subjective image quality, detectability of small mass differences and appearance of burn-out effects and blooming phenomena at various exposure times. METHODS Dried mandibles with teeth from different areas were radiographed at exposures covering a relative range from 1 to 100. Image quality was subjectively evaluated after image processing, when applicable, using a visual grading scale from 0 to 10. The number of visible holes in an aluminium block was used to measure the detectability of small mass differences. Burn-out effects and blooming were evaluated by measuring widths of roots and of aluminium and plastic cylinders. RESULTS Radiographs with the storage phosphor system achieved image quality scores similar to those of film but over a larger exposure range, while CCD images were rated lower and over a smaller range. All holes in the aluminium bl...

Journal ArticleDOI
Ping Wah Wong1
TL;DR: Based on the adaptive error-diffusion algorithm, a method is proposed for constructing a halftone image that can be rendered at multiple resolutions, suitable for progressive transmission and for cases where rendition at several resolutions is required.
Abstract: Error diffusion is a procedure for generating high quality bilevel images from continuous-tone images so that both the continuous and halftone images appear similar when observed from a distance. It is well known that certain objectionable patterning artifacts can occur in error-diffused images. Here, we consider a method for adjusting the error-diffusion filter concurrently with the error-diffusion process so that an error criterion is minimized. The minimization is performed using the least mean squares (LMS) algorithm in adaptive signal processing. Using both raster and serpentine scanning, we show that such an algorithm produces better halftone image quality compared to traditional error diffusion with a fixed filter. Based on the adaptive error-diffusion algorithm, we propose a method for constructing a halftone image that can be rendered at multiple resolutions. Specifically, the method generates a halftone from a continuous tone image such that if the halftone is down-sampled, a binary image would result that is also a high quality rendition of the continuous-tone image at a reduced resolution. Such a halftone image is suitable for progressive transmission, and for cases where rendition at several resolutions is required. Cases for noninteger scaling factors are also considered.
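For reference, the fixed-filter baseline that the adaptive scheme improves upon is classic Floyd-Steinberg error diffusion; a minimal sketch follows (the paper's method would additionally update the four diffusion weights with LMS at each step, which is omitted here):

```python
import numpy as np

def error_diffusion(img):
    """Fixed-filter Floyd-Steinberg error diffusion over a raster scan:
    each pixel of `img` (values in [0, 1]) is thresholded to 0 or 1 and
    the quantization error is diffused to four unprocessed neighbors."""
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros((h, w))
    # (row offset, column offset, weight): right, below-left, below, below-right.
    taps = [(0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16)]
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - out[y, x]
            for dy, dx, wgt in taps:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    f[ny, nx] += err * wgt
    return out
```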

Journal ArticleDOI
TL;DR: Both qualitative and quantitative simulation results clearly show the superiority of the new adaptive algorithm for image interpolation with edge enhancement over standard low-pass interpolation algorithms such as bilinear, diamond-filter, or B-spline interpolation.
Abstract: A new adaptive algorithm for image interpolation with perceptual edge enhancement is proposed. Here, perceptual means that edges are enhanced and interpolated in a visually pleasing way. Each pixel neighborhood is classified into one of three categories (constant, oriented, or irregular). In each case, an optimal interpolation technique finds the missing pixels without generating unpleasant artifacts such as aliasing or ringing effects. Furthermore, a quadratic Volterra filter is employed to extract perceptually important details from the original image, which are then used to improve the overall sharpness and contrast. Both qualitative and quantitative simulation results clearly show the superiority of our method over standard low-pass interpolation algorithms such as bilinear, diamond-filter, or B-spline interpolation. © 1996 Society of Photo-Optical Instrumentation Engineers. Subject terms: image interpolation; zooming; quadratic Volterra filters; detail enhancement; image enhancement; edge enhancement.

Journal ArticleDOI
TL;DR: Three correction methods for specific classes of motion in MRI are described and compared, including a generalized projection onto convex sets (GPOCS) postprocessing algorithm and a radial-scan method that proves to be more robust when more complex motions are present.
Abstract: Motion continues to be a significant problem in MRI, producing image artifacts that can severely degrade image quality. In diffusion-weighted imaging (DWI), the problem is amplified by the presence of large gradient fields used to produce the diffusion weighting. Three correction methods applicable for correction of specific classes of motion are described and compared. The first is based on a generalised projection onto convex sets (GPOCS) postprocessing algorithm. The second technique uses the collection of navigator echoes to track phase errors. The third technique is based on a radial-scan data acquisition combined with a modified projection-reconstruction algorithm. Although each technique corrects well for translations, the radial-scan method proves to be more robust when more complex motions are present. A detailed description of the causes of MR data errors caused by rigid body motion is included as an appendix.

Journal ArticleDOI
TL;DR: Measurements over a period of two years show that the QC test provides a sensitive indication of imaging performance, which results in warning messages to the operator indicating potential problems in system performance.
Abstract: A quality control (QC) test suitable for routine daily use has been developed for video based electronic portal imaging devices. It provides an objective and quantitative test for acceptable image quality on the basis of the high contrast spatial resolution and the contrast-to-noise ratio (CNR). The test uses a phantom consisting of five sets of high-contrast rectangular bar patterns with spatial frequencies of 0.1, 0.2, 0.25, 0.4, and 0.75 lp/mm. Data obtained during a one month calibration period were used to determine a critical frequency fc for the relative square wave modulation transfer function and a critical contrast-to-noise ratio (CNRc). Subsequent measurements indicating significant deviations from these critical values result in warning messages to the operator indicating potential problems in system performance. Measurements over a period of two years show that the QC test provides a sensitive indication of imaging performance.
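The contrast-to-noise ratio tracked by such a QC test can be computed from two regions of interest; a minimal sketch using one common CNR definition (the paper's exact formula may differ):

```python
import numpy as np

def contrast_to_noise(roi_object, roi_background):
    """CNR between two regions of interest: difference of the mean
    signals divided by the standard deviation of the background,
    which serves as the noise estimate."""
    return abs(roi_object.mean() - roi_background.mean()) / roi_background.std()
```

In routine use, the measured CNR would be compared against the critical value CNRc established during the calibration period, warning the operator when it falls below threshold.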

Proceedings ArticleDOI
16 Sep 1996
TL;DR: An algorithm to design a frequency modulated screen using the direct binary search algorithm is described and it is shown that it can maintain halftone image quality while significantly reducing the required computation.
Abstract: We describe an algorithm to design a frequency modulated screen using the direct binary search algorithm. Compared with the direct binary search algorithm itself, we show that we can maintain halftone image quality while significantly reducing the required computation.

Patent
24 Jun 1996
TL;DR: In this paper, a modified zero-tree coding is used for image data compression, where a vector quantizer codes the remaining values and lossless coding is performed on the results of the two coding steps.
Abstract: An apparatus and method for image data compression performs a modified zero-tree coding on a range of image bit plane values from the largest to a defined smaller value, and a vector quantizer codes the remaining values and lossless coding is performed on the results of the two coding steps. The defined smaller value can be adjusted iteratively to meet a preselected compressed image size criterion or to meet a predefined level of image quality, as determined by any suitable metric. If the image to be compressed is in RGB color space, the apparatus converts the RGB image to a less redundant color space before commencing further processing.

Journal ArticleDOI
TL;DR: The connection between transducer structure and image blur is described and the combination of long crystal design and image processing results in substantially improved image contrast and spatial resolution relative to conventional AOTF imaging devices.
Abstract: Image blur in acousto-optic tunable filters (AOTF’s) has been a persistent problem. Here we describe the connection between transducer structure and image blur and experimentally measure it by using a 5-cm 12°-cut TeO2 crystal of our design. With these quantitative results, we develop an image-processing method that minimizes AOTF-related image degradation. The combination of long crystal design and image processing results in substantially improved image contrast and spatial resolution relative to conventional AOTF imaging devices. We present high-magnification images of fluorescent actin fibers in cells in which we obtain a resolution of approximately 0.35 μm, representing the first successful use of an AOTF for ultra-high-resolution microscopy. Further improvements are also predicted.

Journal ArticleDOI
TL;DR: Two robust methods for T1- and T2-weighted snapshot imaging of the heart, with data acquisition within a single heartbeat and suppression of the blood signal, constitute suitable methods for fast and high-quality cardiac magnetic resonance imaging (MRI).
Abstract: Purpose: To implement and evaluate two robust methods for T1- and T2-weighted snapshot imaging of the heart with data acquisition within a single heart beat and suppression of blood signal. Methods: Both T1- and T2-weighted diastolic images of the heart can be obtained with half Fourier single-shot turbo spin echo (HASTE) and turbo fast low-angle shot (turboFLASH) sequences, respectively, in less than 350 ms. Signal from flowing blood in the ventricles and large vessels can be suppressed by a preceding inversion recovery preparing pulse pair (PRESTO). Fifteen volunteers and five patients have been evaluated quantitatively for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and flow void, and qualitatively for image quality, artifacts, and black-blood effect. Results: Both PRESTO-HASTE and PRESTO-turboFLASH achieved consistently good image quality and blood signal suppression. In contrast to gradient-echo (GRE) echo-planar imaging (EPI) techniques, HASTE and turboFLASH are much less sensitive to local susceptibility differences in the thorax, resulting in a more robust imaging technique without the need for time-consuming system tuning. Compared to standard spin-echo sequences with cardiac triggering, HASTE and turboFLASH have significantly shorter image acquisition times and are not vulnerable to respiratory motion artifacts. Conclusion: PRESTO-HASTE and PRESTO-turboFLASH constitute suitable methods for fast and high-quality cardiac magnetic resonance imaging (MRI).

Journal ArticleDOI
TL;DR: The results of the evaluation indicate that the PSPL system may be capable of operating at exposures up to 80% lower than for film with comparable image quality.
Abstract: Direct digital imaging systems are becoming increasingly common in dental radiography. The majority of these systems are based on charge coupled device (CCD) technology. A relatively new system, based on photo-stimulable phosphor luminescence (PSPL), is now commercially available. This system appears to overcome some of the restrictions of CCD systems including those associated with the bulky detector, connecting wire, limited image size and limited exposure latitude. A physical evaluation of the PSPL system was conducted including measurements of sensitometric response, modulation transfer function (MTF), noise power spectrum (NPS), noise equivalent quanta (NEQ) and detective quantum efficiency (DQE). These measurements were compared with results obtained previously for Kodak Ektaspeed (E-speed) film. The results of the evaluation indicate that the PSPL system may be capable of operating at exposures up to 80% lower than for film with comparable image quality. Digital methods of obtaining radiographic images are now widely available in general radiology. The advantages of these new methods over film based systems include faster examination times, digital image archive/ communication, image processing and, in some cases, lower exposures. One such direct digital radiography (DDR) system, known as computed radiography (CR) [1] has recently gained acceptance in a wide range of radiographic examinations. These systems are referred to as "direct digital" because they do not employ an intermediate, non-digital, imaging stage in generating the digital data. CR utilizes image plates incorporating a photo-stimulable storage phosphor as the X-ray image receptor. The image plates both resemble and are used in a similar manner to, conventional radiographic screens. However, unlike conventional radiographic screens which promptly produce luminescence on exposure to X-rays, a proportion of the X-ray energy is stored in the storage phosphor, forming a latent image. 
The stored image data are then retrieved by scanning the image plate with an external energy source, in this case a laser, which stimulates release of the stored energy from the phosphor in the form of luminescence. Emitted luminescence is detected by a photomultiplier tube and the signal is converted to a digital image by an analogue-to-digital converter. The image plates can be used with existing radiographic equipment using the same radio
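The figures of merit listed in the abstract are related by standard expressions: NEQ(f) = S²·MTF²(f)/NPS(f) and DQE(f) = NEQ(f)/q, where S is the large-area signal and q the incident photon fluence. A minimal sketch of that chain of definitions follows; the frequency axis, MTF and NPS curves, and the constants are hypothetical placeholders, not the paper's PSPL measurements:

```python
import numpy as np

# Hypothetical frequency axis (cycles/mm) and measured quantities;
# the real PSPL curves are in the paper and are not reproduced here.
f = np.linspace(0.05, 10.0, 200)     # spatial frequency
mtf = np.exp(-0.25 * f)              # assumed modulation transfer function
nps = 1e-5 * (1.0 + 0.1 * f)         # assumed noise power spectrum
large_area_signal = 1.0              # assumed large-area system transfer S
q = 2.5e5                            # assumed incident fluence (photons/mm^2)

# Noise-equivalent quanta: NEQ(f) = S^2 * MTF^2(f) / NPS(f)
neq = (large_area_signal * mtf) ** 2 / nps

# Detective quantum efficiency: DQE(f) = NEQ(f) / q
dqe = neq / q
```

With these placeholder curves the DQE falls monotonically with frequency, as the squared MTF decays faster than the NPS grows; comparing such curves against those of E-speed film is what supports the exposure-reduction claim.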

Journal ArticleDOI
TL;DR: The cardiac fluoroscopy technique provides an approximate eightfold reduction in the time required to obtain subject‐specific double oblique sections and an independent graphical user interface facilitates interactive control of section localization and contrast by permitting pulse sequence parameter modification during scanning.
Abstract: A technique is described for high speed interactive imaging of the heart with either white or black blood contrast. Thirty-two views of a segmented, magnetization-prepared gradient echo sequence are acquired during diastole. Using three-quarter partial Fourier sampling, data for a complete 128 x 128 image are acquired in three cardiac cycles. High speed reconstruction provides an image update each cardiac cycle, 159 ms after measurement. An independent graphical user interface facilitates interactive control of section localization and contrast by permitting pulse sequence parameter modification during scanning. The efficiency and image quality of the cardiac MR fluoroscopy technique were evaluated in 11 subjects. Compared with the conventional graphic prescription method, the cardiac fluoroscopy technique provides an approximate eightfold reduction in the time required to obtain subject-specific double oblique sections. Image quality for these scout acquisitions performed during free breathing was sufficient to identify small cardiac structures.
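The acquisition arithmetic in the abstract can be checked directly: three-quarter partial Fourier sampling of a 128-line matrix requires 96 phase-encode lines, and at 32 views per cardiac cycle that is three cycles per image. A worked check:

```python
# Acquisition parameters quoted in the abstract.
matrix_lines = 128        # phase-encode dimension of the 128 x 128 image
partial_fourier = 3 / 4   # three-quarter partial Fourier sampling
views_per_cycle = 32      # segmented views acquired each diastole

lines_acquired = int(matrix_lines * partial_fourier)  # 96 phase-encode lines
cycles_needed = lines_acquired // views_per_cycle     # 3 cardiac cycles
```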

Journal ArticleDOI
TL;DR: This paper describes the complete hybrid compression system with emphasis on the texture modeling issues and obtains natural spectral texture models and avoid the boundary blending problems usually associated with polygonal modeling.
Abstract: High-quality image compression algorithms are capable of achieving transmission or storage rates of 0.3 to 0.5 b/pixel with low degradation in image quality. In order to obtain even lower bit rates, we relax the usual RMS error definition of image quality and allow certain "less critical" portions of the image to be transmitted as texture models. These regions are then reconstructed at the receiver with statistical fidelity in the mid- to high-range spatial frequencies and absolute fidelity in the lowpass frequency range. This hybrid spectral texture modeling technique takes place in the discrete wavelet transform domain. In this way, we obtain natural spectral texture models and avoid the boundary blending problems usually associated with polygonal modeling. This paper describes the complete hybrid compression system with emphasis on the texture modeling issues.
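The core idea, absolute fidelity in the lowpass band and only statistical fidelity in the detail bands, can be illustrated with a single-level Haar transform. This is a deliberate simplification: the paper's wavelet, region selection, and texture statistics are richer, and everything below (the test image, using a single standard deviation per band) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar2d(x):
    """Single-level 2-D Haar analysis: returns LL, LH, HL, HH subbands."""
    a = (x[0::2] + x[1::2]) / 2.0          # row-wise average
    d = (x[0::2] - x[1::2]) / 2.0          # row-wise difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2] = a + d
    x[1::2] = a - d
    return x

# Stand-in for a "less critical" textured region of the image.
img = rng.normal(128.0, 20.0, (64, 64))
ll, lh, hl, hh = haar2d(img)

# Keep the lowpass band exactly (absolute fidelity); transmit only the
# standard deviation of each detail band and resynthesize it at the
# receiver as matched noise (statistical fidelity).
synth = lambda band: rng.normal(0.0, band.std(), band.shape)
recon = ihaar2d(ll, synth(lh), synth(hl), synth(hh))
```

Because the lowpass band is carried exactly, the reconstruction preserves the coarse structure of the region, while the detail bands only match the original in their second-order statistics, which is the fidelity criterion the paper relaxes to reach lower bit rates.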

Journal ArticleDOI
TL;DR: Various practical effects on image quality, including those of annular pupils, the size of the confocal pinhole, and the numerical aperture of the objectives, are examined experimentally.
Abstract: We report on confocal scanning imaging through highly scattering media. Various practical effects including those of the annular pupils and the size of the confocal pinhole as well as of the numerical aperture of objectives on the image quality are examined experimentally. The combination of an annular objective with a finite-sized detector may prove advantageous for improving image quality.

Journal ArticleDOI
TL;DR: A psychometric space in which the positions of the images are determined by objective measures on the images, and it is shown that the attributes and the quality thus estimated correlate well with the perceived attributes and quality.
Abstract: Reliable and economic methods for assessing image quality are essential for designing better imaging systems. Although reliable psychophysical methods are available for assessing perceptual image quality with the help of human subjects, the cost of performing such experiments prevents their use for evaluating large amounts of image material. This has led to an increasing demand for objective methods for estimating image quality. The perceived quality of an image is usually determined by several underlying perceptual attributes such as sharpness and noisiness. In the accompanying paper [J. Opt. Soc. Am. A 13, 1166–1177 (1996)] it is demonstrated that the relationships between images on the one hand and judgments on attributes and overall quality by subjects on the other hand can be characterized in a multidimensional perceptual space. In this perceptual space the images are represented by points, and the strengths of their perceptual attributes are modeled by the projections of these image positions onto the attribute axes. In analogy with the perceptual space we will introduce a psychometric space in which the positions of the images are determined by objective measures on the images. In the case of images degraded by blur and noise the stimulus coordinates are functions of the estimated spread of the blurring kernel and the estimated standard deviation of the noise, respectively. According to the model presented in this paper, the perceptual attributes of images can be estimated in three steps. In the first step the physical parameters (blur spread and noise standard deviation) are estimated from the images. In the second step these estimates are used to position the images in psychometric space. In the third step the attribute strengths are derived by projecting the latter image positions onto the attribute axes. We show that the attributes and the quality thus estimated correlate well with the perceived attributes and quality.
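The three-step pipeline can be sketched end to end. This is only a schematic of the paper's model: the noise estimator is a generic first-difference estimator, the blur spread is taken as given, and the coordinate mapping and attribute axes below are hypothetical placeholders for the functions and axes the authors fit to their data:

```python
import numpy as np

# Step 1: estimate physical parameters from the image. For white noise,
# horizontal first differences have variance 2*sigma^2, so dividing the
# difference std by sqrt(2) recovers sigma (a crude, generic estimator;
# blur-spread estimation is omitted and assumed available).
def estimate_noise_std(img):
    d = np.diff(img, axis=1)
    return d.std() / np.sqrt(2.0)

# Step 2: position the image in a 2-D psychometric space. The log1p
# mappings are placeholders for the fitted functions in the paper.
def psychometric_coords(blur_spread, noise_std):
    return np.array([np.log1p(blur_spread), np.log1p(noise_std)])

# Step 3: attribute strengths as projections onto (assumed) attribute axes.
sharpness_axis = np.array([-1.0, 0.0])   # more blur  -> lower sharpness
noisiness_axis = np.array([0.0, 1.0])    # more noise -> higher noisiness

pos = psychometric_coords(blur_spread=1.5, noise_std=4.0)
sharpness = pos @ sharpness_axis
noisiness = pos @ noisiness_axis
```

The same projection step would be repeated for each attribute axis in the space; overall quality is then modeled from the image position in the same space.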