
Showing papers on "Image quality" published in 1998


Journal ArticleDOI
23 Apr 1998-Nature
TL;DR: In this paper, the authors report a solution to the aberration problem for a medium-voltage electron microscope that gives a stunning enhancement of image quality and improves the microscope's resolution.
Abstract: One of the biggest obstacles in improving the resolution of the electron microscope has always been the blurring of the image caused by lens aberrations. Here we report a solution to this problem for a medium-voltage electron microscope which gives a stunning enhancement of image quality.

948 citations


Proceedings ArticleDOI
04 Oct 1998
TL;DR: Experimental results show that spatially adaptive wavelet thresholding yields significantly superior image quality and lower MSE than optimal uniform thresholding.
Abstract: The method of wavelet thresholding for removing noise, or denoising, has been researched extensively due to its effectiveness and simplicity. Much of the work has been concentrated on finding the best uniform threshold or best basis. However, not much has been done to make this method adaptive to spatially changing statistics which is typical of a large class of images. This work proposes a spatially adaptive wavelet thresholding method based on context modeling, a common technique used in image compression to adapt the coder to the non-stationarity of images. We model each coefficient as a random variable with the generalized Gaussian prior with unknown parameters. Context modeling is used to estimate the parameters for each coefficient, which are then used to adapt the thresholding strategy. Experimental results show that spatially adaptive wavelet thresholding yields significantly superior image quality and lower MSE than optimal uniform thresholding.
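
A minimal sketch of the core idea, assuming a BayesShrink-style rule in which each detail coefficient gets its own threshold σ_n²/σ_x, with the signal spread σ_x estimated from a local context window; the function and window size below are illustrative, not the authors' implementation:

```python
import numpy as np

def adaptive_soft_threshold(subband, noise_sigma, win=5):
    """Soft-threshold one wavelet detail subband with a per-coefficient
    threshold noise_sigma**2 / sigma_x, where sigma_x is estimated from a
    local window of neighbouring coefficients (the 'context')."""
    pad = win // 2
    padded = np.pad(subband.astype(float), pad, mode='reflect')
    thresh = np.empty(subband.shape, dtype=float)
    for i in range(subband.shape[0]):
        for j in range(subband.shape[1]):
            block = padded[i:i + win, j:j + win]
            # signal spread: local second moment minus the noise variance
            sigma_x = np.sqrt(max(np.mean(block ** 2) - noise_sigma ** 2, 1e-12))
            thresh[i, j] = noise_sigma ** 2 / sigma_x
    # soft thresholding: shrink magnitudes toward zero by the local threshold
    return np.sign(subband) * np.maximum(np.abs(subband) - thresh, 0.0)
```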

635 citations


Journal ArticleDOI
TL;DR: An optical coherence tomography system is described which can image up to video rate and features a high speed scanning delay line in the reference arm based on Fourier-transform pulse shaping technology.
Abstract: An optical coherence tomography system is described which can image up to video rate. The system utilizes a high power broadband source and real time image acquisition hardware and features a high speed scanning delay line in the reference arm based on Fourier-transform pulse shaping technology. The theory of low coherence interferometry with a dispersive delay line, and the operation of the delay line are detailed and the design equations of the system are presented. Real time imaging is demonstrated in vivo in tissues relevant to early human disease diagnosis (skin, eye) and in an important model in developmental biology (Xenopus laevis).

624 citations


Proceedings ArticleDOI
24 Jul 1998
TL;DR: The model is based on a multiscale representation of pattern, luminance, and color processing in the human visual system and can be usefully applied to image quality metrics, image compression methods, and perceptually-based image synthesis algorithms.
Abstract: In this paper we develop a computational model of adaptation and spatial vision for realistic tone reproduction. The model is based on a multiscale representation of pattern, luminance, and color processing in the human visual system. We incorporate the model into a tone reproduction operator that maps the vast ranges of radiances found in real and synthetic scenes into the small fixed ranges available on conventional display devices such as CRT’s and printers. The model allows the operator to address the two major problems in realistic tone reproduction: wide absolute range and high dynamic range scenes can be displayed; and the displayed images match our perceptions of the scenes at both threshold and suprathreshold levels to the degree possible given a particular display device. Although in this paper we apply our visual model to the tone reproduction problem, the model is general and can be usefully applied to image quality metrics, image compression methods, and perceptually-based image synthesis algorithms. CR Categories: I.3.0 [Computer Graphics]: General;

458 citations


Journal ArticleDOI
01 May 1998
TL;DR: In this article, a multi-domain vertical alignment LCD (MVA-LCD) providing super-high image quality was developed using newly introduced rubbing-less technology, which automatically controls the directors of the LC molecules.
Abstract: An MVA-LCD (multi-domain vertical alignment LCD) that provides super-high image quality has been developed by newly introduced rubbing-less technology. A newly introduced “protrusion” designed on the TFT substrates and on the color filter substrates automatically controls the directors of the LC molecules. With this technology we have successfully developed four-domain 15″ MVA-TFT units that provide an extremely wide viewing angle of more than 160 degrees, a high contrast ratio of 300:1 or more, and a fast response of less than 25 ms.

412 citations


Journal ArticleDOI
TL;DR: A review of perceptual image quality metrics and their application to still image compression can be found in this article, which examines a broad range of metrics, from simple mathematical measures to those that incorporate full perceptual models.

383 citations


Journal ArticleDOI
TL;DR: A new methodology for the determination of an objective metric for still image coding is reported, and the PQS closely approximates the MOS, with a correlation coefficient of more than 0.92.
Abstract: A new methodology for the determination of an objective metric for still image coding is reported. This methodology is applied to obtain a picture quality scale (PQS) for the coding of achromatic images over the full range of image quality defined by the subjective mean opinion score (MOS). This PQS takes into account the properties of visual perception for both global features and localized disturbances. The PQS closely approximates the MOS, with a correlation coefficient of more than 0.92, as compared to 0.57 obtained using the conventional weighted mean-square error (WMSE). Extensions and applications of the methodology and of the resulting metric are discussed.

364 citations


Journal ArticleDOI
TL;DR: An analysis of the signal-to-noise ratio (SNR) in the resulting enhanced image shows that the SNR decreases exponentially with range and a temporal filter structure is proposed to solve this problem.
Abstract: In daylight viewing conditions, image contrast is often significantly degraded by atmospheric aerosols such as haze and fog. This paper introduces a method for reducing this degradation in situations in which the scene geometry is known. Contrast is lost because light is scattered toward the sensor by the aerosol particles and because the light reflected by the terrain is attenuated by the aerosol. This degradation is approximately characterized by a simple, physically based model with three parameters. The method involves two steps: first, an inverse problem is solved in order to recover the three model parameters; then, for each pixel, the relative contributions of scattered and reflected flux are estimated. The estimated scatter contribution is simply subtracted from the pixel value and the remainder is scaled to compensate for aerosol attenuation. This paper describes the image processing algorithm and presents an analysis of the signal-to-noise ratio (SNR) in the resulting enhanced image. This analysis shows that the SNR decreases exponentially with range. A temporal filter structure is proposed to solve this problem. Results are presented for two image sequences taken from an airborne camera in hazy conditions and one sequence in clear conditions. A satisfactory agreement between the model and the experimental data is shown for the haze conditions. A significant improvement in image quality is demonstrated when using the contrast enhancement algorithm in conjunction with a temporal filter.
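
A sketch of the inversion described above, assuming the common three-parameter form I = J·e^(−βd) + A·(1 − e^(−βd)) with known per-pixel range d; the function and parameter names are illustrative:

```python
import numpy as np

def dehaze(image, depth, beta, airlight):
    """Invert the simple physical haze model.

    image    : observed intensity per pixel
    depth    : known scene range to each pixel (scene geometry is known)
    beta     : atmospheric extinction coefficient
    airlight : intensity of light scattered toward the sensor

    The scatter term is subtracted and the remainder rescaled to undo
    the aerosol attenuation.
    """
    transmission = np.exp(-beta * depth)
    scattered = airlight * (1.0 - transmission)
    restored = (image - scattered) / np.maximum(transmission, 1e-3)
    return np.clip(restored, 0.0, None)
```

Dividing by a transmission that shrinks exponentially with range is what makes the restored SNR fall off with distance, which is the motivation for the temporal filter discussed in the abstract.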

342 citations


Journal ArticleDOI
TL;DR: The noise characteristics show that the proposed algorithm efficiently utilizes the data collected with the optimized sampling scan, achieving acceptable image quality and spatial resolution at a scanning speed about three times faster than that of single-slice CT.
Abstract: Efforts are being made to develop a new type of CT system that can scan volumes over a large range within a short time with thin slice images. One of the most promising approaches is the combination of helical scanning with multi-slice CT, which involves several detector arrays stacked in the z direction. However, the algorithm for image reconstruction remains one of the biggest problems in multi-slice CT. Two helical interpolation methods for single-slice CT, 360LI and 180LI, were used as starting points and extended to multi-slice CT. The extended methods, however, had a serious image quality problem due to the following three reasons: (1) excessively close slice positions of the complementary and direct data, resulting in a larger sampling interval; (2) the existence of several discontinuous changeovers in pairs of data samples for interpolation; and (3) the existence of cone angles. Therefore we have proposed a new algorithm to overcome the problem. It consists of the following three parts: (1) optimized sampling scan; (2) filter interpolation; and (3) fan-beam reconstruction. Optimized sampling scan refers to a special type of multi-slice helical scan developed to shift the slice position of complementary data and to acquire data with a much smaller sampling interval in the z direction. Filter interpolation refers to a filtering process performed in the z direction using several data. The normal fan-beam reconstruction technique is used. The section sensitivity profile (SSP) and image quality for four-array multi-slice CT were investigated by computer simulations. Combinations of three types of optimized sampling scan and various filter widths were used. The algorithm enables us to achieve acceptable image quality and spatial resolution at a scanning speed that is about three times faster than that for single-slice CT. The noise characteristics show that the proposed algorithm efficiently utilizes the data collected with optimized sampling scan. The new algorithm allows suitable combinations of scan and filter parameters to be selected to meet the purpose of each examination.
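
One way to picture the filter-interpolation step (step 2): for each reconstructed slice position, projection samples whose z positions fall inside a chosen filter width are combined with distance-dependent weights. The triangular filter and array shapes below are assumptions for illustration, not the paper's exact filter:

```python
import numpy as np

def z_filter_interpolate(projections, z_positions, z_target, filter_width):
    """Weighted combination of projection samples near the target slice.

    projections : projection samples, shape (n_samples, n_channels)
    z_positions : z position of each sample (from the optimized sampling scan)
    z_target    : slice position to reconstruct
    filter_width: full width of the triangular z-filter
    """
    dz = np.abs(np.asarray(z_positions, dtype=float) - z_target)
    weights = np.clip(1.0 - 2.0 * dz / filter_width, 0.0, None)  # triangle filter
    if weights.sum() == 0:
        raise ValueError("no samples fall inside the filter width")
    weights /= weights.sum()
    return (weights[:, None] * projections).sum(axis=0)
```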

341 citations


Journal ArticleDOI
TL;DR: This paper formulates a corresponding expectation-maximization (EM) algorithm, as well as a method for estimating noise properties at the ML estimate, for an idealized two-dimensional positron emission tomography [2-D PET] detector.
Abstract: Using a theory of list-mode maximum-likelihood (ML) source reconstruction presented recently by Barrett et al. (1997), this paper formulates a corresponding expectation-maximization (EM) algorithm, as well as a method for estimating noise properties at the ML estimate. List-mode ML is of interest in cases where the dimensionality of the measurement space impedes a binning of the measurement data. It can be advantageous in cases where a better forward model can be obtained by including more measurement coordinates provided by a given detector. Different figures of merit for the detector performance can be computed from the Fisher information matrix (FIM). This paper uses the observed FIM, which requires a single data set, thus avoiding costly ensemble statistics. The proposed techniques are demonstrated for an idealized two-dimensional (2-D) positron emission tomography (PET) detector. The authors compute from simulation data the improved image quality obtained by including the time of flight of the coincident quanta.
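
A minimal sketch of one list-mode EM update under the usual formulation, where each recorded event contributes one row of the system model; the names and array shapes are illustrative:

```python
import numpy as np

def list_mode_em_step(lam, p, sens, eps=1e-12):
    """One list-mode EM update of the image estimate.

    lam  : current image estimate, shape (n_voxels,)
    p    : system-model rows for the recorded events, shape (n_events, n_voxels)
    sens : voxel sensitivities (sum of detection probabilities), shape (n_voxels,)
    """
    forward = p @ lam                                   # expected rate per recorded event
    backproj = p.T @ (1.0 / np.maximum(forward, eps))   # backproject event ratios
    return lam * backproj / np.maximum(sens, eps)
```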

341 citations


Journal ArticleDOI
TL;DR: This paper presents a comprehensive analysis and classification of the numerous coding artifacts which are introduced into the reconstructed video sequence through the use of the hybrid MC/DPCM/DCT video coding algorithm.

Journal ArticleDOI
TL;DR: A new method is proposed to estimate the image noise variance for this type of data distribution based on a double image acquisition, thereby exploiting the knowledge of the Rice distribution moments.

Journal ArticleDOI
TL;DR: A new image compression technique called DjVu is presented that enables fast transmission of document images over low-speed connections, while faithfully reproducing the visual aspect of the document, including color, fonts, pictures, and paper texture.

Journal ArticleDOI
TL;DR: The experimental results show that the proposed watermarking technique results in an almost invisible difference between the watermarked image and the original image, and is robust to common image processing operations and JPEG lossy compression.
Abstract: In this paper, a multiresolution-based technique for embedding digital "watermarks" into images is proposed. Watermarking has been proposed as a method of hiding secret information in images so as to discourage unauthorized copying or to attest the origin of the images. In our method, we take advantage of multiresolution signal decomposition. Both the watermark and the host image are decomposed into multiresolution representations with different structures, and the decomposed watermark at each resolution is then embedded into the corresponding resolution of the decomposed image. In case of image quality degradation, the low-resolution rendition of the watermark will still be preserved within the corresponding low-resolution components of the image. The experimental results show that the proposed watermarking technique results in an almost invisible difference between the watermarked image and the original image, and is robust to common image processing operations and JPEG lossy compression.
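
A simplified sketch of multiresolution embedding using PyWavelets, assuming the watermark has been resized to the host's dimensions; the additive rule and strength parameter alpha are illustrative, not the paper's exact scheme:

```python
import numpy as np
import pywt

def embed_watermark(host, watermark, alpha=0.05, wavelet='haar', level=2):
    """Decompose host and watermark, add the watermark's coefficients into
    the corresponding subbands of the host, and reconstruct."""
    h = pywt.wavedec2(host.astype(float), wavelet, level=level)
    w = pywt.wavedec2(watermark.astype(float), wavelet, level=level)
    out = [h[0] + alpha * w[0]]                       # coarsest approximation band
    for (hH, hV, hD), (wH, wV, wD) in zip(h[1:], w[1:]):
        out.append((hH + alpha * wH, hV + alpha * wV, hD + alpha * wD))
    return pywt.waverec2(out, wavelet)
```

Because each watermark resolution level sits in the matching host subband, damage that wipes out the fine scales still leaves the coarse rendition of the watermark recoverable, as the abstract notes.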

Proceedings ArticleDOI
24 Jul 1998
TL;DR: A perceptually based approach for selecting image samples has been developed and the resulting new image quality model was inserted into an image synthesis program by first modifying the rendering algorithm so that it computed a wavelet representation.
Abstract: A perceptually based approach for selecting image samples has been developed. An existing image processing vision model has been extended to handle color and has been simplified to run efficiently. The resulting new image quality model was inserted into an image synthesis program by first modifying the rendering algorithm so that it computed a wavelet representation. In addition to allowing image quality to be determined as the image was generated, the wavelet representation made it possible to use statistical information about the spatial frequency distribution of natural images to estimate values where samples were yet to be taken. Tests on the image synthesis algorithm showed that it correctly handled achromatic and chromatic spatial detail and that it was able to predict and compensate for masking effects. The program was also shown to produce images of equivalent visual quality while using different rendering techniques.

Journal ArticleDOI
TL;DR: ECG-oriented image reconstructions improve the quality of heart imaging with spiral CT significantly and appear adequate to assess coronary calcium measurements with conventional subsecond spiral CT.
Abstract: Subsecond computed tomography (CT) scanning offers potential for improved heart imaging. We therefore developed and validated dedicated reconstruction algorithms for imaging the heart with subsecond spiral CT utilizing electrocardiogram (ECG) information. We modified spiral CT z-interpolation algorithms on a subsecond spiral CT scanner. Two new classes of algorithms were investigated: (a) 180 degrees CI (cardio interpolation), a piecewise linear interpolation between adjacent spiral data segments belonging to the same heart phase where segments are selected by correlation with the simultaneously recorded ECG signal and (b) 180 degrees CD (cardio delta), a partial scan reconstruction of 180 degrees + delta with delta < fan angle, resulting in reduced effective scan times of less than 0.5 s. Computer simulations as well as processing of clinical data collected with 0.75 s scan time were carried out to evaluate these new approaches. Both 180 degrees CI and 180 degrees CD provided significant improvements in image quality. Motion artifacts in the reconstructed images were largely reduced as compared to standard spiral reconstructions; in particular, coronary calcifications were delineated more sharply and multiplanar reformations showed improved contiguity. However, new artifacts in the image plane are introduced, mostly due to the combination of different data segments. ECG-oriented image reconstructions improve the quality of heart imaging with spiral CT significantly. Image quality and the display of coronary calcification appear adequate to assess coronary calcium measurements with conventional subsecond spiral CT.

01 Jan 1998
TL;DR: All of the traditional steganographic techniques have limited information-hiding capacity, because the principle was to replace a special part of the frequency components of the vessel image, or to replace all the least significant bits of a multi-valued image with the secret information.
Abstract: Steganography is a technique to hide secret information in some other data without leaving any apparent evidence of data alteration. All of the traditional steganographic techniques have limited information-hiding capacity. They can hide only 10 percent of the data amount of the vessel. This is because the principle of those techniques was either to replace a special part of the frequency components of the vessel image, or to replace all the least significant bits of a multi-valued image with the secret information.
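
For contrast with the capacity argument above, a sketch of the classic least-significant-bit replacement that the abstract describes as a traditional technique; one secret bit replaces the LSB of each 8-bit pixel, so the payload is only a small fraction of the vessel's data volume:

```python
import numpy as np

def embed_lsb(cover, bits):
    """Replace the least significant bit of each pixel with one secret bit.

    cover : uint8 image (the 'vessel')
    bits  : iterable of 0/1 values, at most cover.size entries
    """
    flat = cover.ravel().copy()
    bits = np.asarray(list(bits), dtype=np.uint8)
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # clear LSB, insert secret bit
    return flat.reshape(cover.shape)
```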

Journal ArticleDOI
TL;DR: It is concluded that the linear interpolation method, which takes correlation into consideration, is the most suitable for consumer product applications such as digital still cameras.
Abstract: This paper discusses the interpolation technique applied to the Bayer primary color method, used frequently as the pixel structure of CCD image sensors for digital still cameras. Eight typical types of interpolation methods are discussed from three viewpoints: the characteristics of the interpolated images, the processing time required to realize their methods based on a 32-bit MCU for embedded applications, and the quality of the resultant images. In terms of reducing the occurrences of pseudocolor and achieving good color restoration, the linear interpolation method taking G's correlation determined by using R/B pixels into consideration was found to be excellent. The measured machine cycle of the interpolation methods was approximately 46 cycles per pixel. Therefore, every method was able to interpolate a VGA-size image in approximately 0.2 seconds with the MCU operating at 60 MHz. In terms of the S/N ratio, a good image quality was obtained through the linear interpolation methods, even with shorter processing time. Based on these results it is concluded that the linear interpolation method, which takes correlation into consideration, is the most suitable for consumer product applications such as digital still cameras.
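
A sketch of the kind of correlation-guided linear interpolation the study favours, shown for the missing green value at a red or blue site of the Bayer mosaic; the gradient test that picks the interpolation direction is illustrative, not the paper's exact rule:

```python
import numpy as np

def interp_green_at_rb(raw, i, j):
    """Interpolate the missing G value at a red/blue site (i, j) of a Bayer
    CFA, choosing the direction from the correlation of the same-colour
    (R or B) neighbours two pixels away.  raw is a float array; (i, j) is
    assumed to lie at least two pixels from the image border."""
    grad_h = abs(raw[i, j - 2] - raw[i, j + 2])   # horizontal variation of R/B
    grad_v = abs(raw[i - 2, j] - raw[i + 2, j])   # vertical variation of R/B
    if grad_h < grad_v:                            # stronger horizontal correlation
        return 0.5 * (raw[i, j - 1] + raw[i, j + 1])
    if grad_v < grad_h:                            # stronger vertical correlation
        return 0.5 * (raw[i - 1, j] + raw[i + 1, j])
    return 0.25 * (raw[i, j - 1] + raw[i, j + 1] + raw[i - 1, j] + raw[i + 1, j])
```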

Proceedings ArticleDOI
TL;DR: A novel approach is proposed based on a shift-invariant extension of the 2D discrete wavelet transform, which yields an overcomplete and thus shift-invariant multiresolution signal representation and outperforms other multiresolution fusion methods with respect to temporal stability and consistency.
Abstract: In pixel-level image sequence fusion, a composite image sequence has to be built of several spatially registered input image sequences. One of the primary goals in image sequence fusion is the temporal stability and consistency of the fused image sequence. To fulfill the preceding desiderata, we propose a novel approach based on a shift invariant extension of the 2D discrete wavelet transform, which yields an overcomplete and thus shift invariant multiresolution signal representation. The advantage of the shift invariant fusion method is the improved temporal stability and consistency of the fused sequence, compared to other multiresolution fusion methods. To evaluate temporal stability and consistency of the fused sequence we introduce a quality measure based on the mutual information between the inter-frame-differences (IFD) of the input sequences and the fused image sequence. If the mutual information is high, the information in the IFD of the fused sequence is low with respect to the information present in the IFDs of the input sequences, indicating a stable and consistent fused image sequence. We evaluate the performance of several multiresolution fusion schemes on a real-world image sequence pair and show that the shift invariant fusion method outperforms the other multiresolution fusion methods with respect to temporal stability and consistency.
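
A sketch of the proposed quality measure: mutual information between the inter-frame differences (IFDs) of an input sequence and of the fused sequence, averaged over frames. The histogram estimator and bin count are assumptions for illustration:

```python
import numpy as np

def mutual_information(x, y, bins=64):
    """Mutual information between two images, from their joint histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def temporal_stability(seq_in, seq_fused):
    """Average MI between the IFDs of an input sequence and of the fused
    sequence; higher values indicate a more stable, consistent fusion."""
    scores = []
    for t in range(1, len(seq_fused)):
        ifd_in = seq_in[t] - seq_in[t - 1]
        ifd_fused = seq_fused[t] - seq_fused[t - 1]
        scores.append(mutual_information(ifd_in, ifd_fused))
    return float(np.mean(scores))
```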

Proceedings ArticleDOI
16 Aug 1998
TL;DR: Results shown indicate that the calculated Importance Maps correlate well with human perception of visually important regions.
Abstract: We present a method for automatically determining the perceptual importance of different regions in an image. The algorithm is based on human visual attention and eye movement characteristics. Several features known to influence human visual attention are evaluated for each region of a segmented image to produce an importance value for each factor and region. These are combined to produce an Importance Map, which classifies each region of the image in relation to its perceptual importance. Results shown indicate that the calculated Importance Maps correlate well with human perception of visually important regions. The Importance Maps can be used in a variety of applications, including compression, machine vision, and image databases. Our technique is computationally efficient and flexible, and can easily be extended to specific applications.
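
A schematic of the map-building step: each segmented region receives a score per attention feature, and the scores are combined into a single importance value per region. The plain average below stands in for the paper's combination of factors and is an assumption:

```python
import numpy as np

def importance_map(region_labels, feature_scores):
    """Build an Importance Map from per-region feature scores.

    region_labels  : integer image of region ids from the segmentation
    feature_scores : dict {feature name: {region id: score in [0, 1]}}
    """
    regions = np.unique(region_labels)
    combined = {
        r: np.mean([scores[r] for scores in feature_scores.values()])
        for r in regions
    }
    out = np.zeros(region_labels.shape, dtype=float)
    for r, value in combined.items():
        out[region_labels == r] = value     # paint each region with its importance
    return out
```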

Proceedings ArticleDOI
24 Jul 1998
TL;DR: The variable exposure flat-field correction methodology proposed here provides an improved match to the fixed-point noise superimposed in the uncorrected image, particularly for the higher spatial frequencies in the image as demonstrated by DQE(f) measurements.
Abstract: The effects of the stationary noise patterns and variable pixel responses that commonly occur with uniform exposure of digital detectors can be effectively reduced by simple 'flat-field' image processing methods. These methods are based upon a linear system response and the acquisition of an image (or images) acquired at a high exposure to create an inverse matrix of values that when applied to an uncorrected image, remove the effects of the stationary noise components. System performance is optimized when the correction image is totally free of statistical variations. However, the stationary noise patterns will not be effectively removed for flat-field images that are acquired at a relatively low exposure or for systems with non-linear response to incident exposure variations. A reduction in image quality occurs with the incomplete removal of the stationary noise patterns, resulting in a loss of detective quantum efficiency of the system. A more flexible approach to the global flat-field correction methodology is investigated using a pixel by pixel least squares fit to 'synthesize' a variable flat-field image based upon the pixel value (incident exposure) of the image to be corrected. All of the information is stored in two 'equivalent images' containing the slope and intercept parameters. The methodology provides an improvement in the detective quantum efficiency (DQE) due to the greater immunity of the stationary noise variation encoded in the slope/intercept parameters calculated on a pixel by pixel basis over a range of incident exposures. When the raw image contains a wide range of incident exposures (e.g., transmission through an object) the variable exposure flat-field correction methodology proposed here provides an improved match to the fixed-point noise superimposed in the uncorrected image, particularly for the higher spatial frequencies in the image as demonstrated by DQE(f) measurements. Successful application to clinical digital mammography biopsy images has been demonstrated, and benefit to other digital detectors appears likely.
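
A sketch of the variable-exposure idea: fit a slope and intercept per pixel from flats acquired over a range of exposures, store the two 'equivalent images', and invert the linear response when correcting. The least-squares form and array shapes are assumptions:

```python
import numpy as np

def fit_flat_field(flats, exposures):
    """Pixel-by-pixel least-squares fit of a linear detector response.

    flats     : stack of flat-field images, shape (n_exposures, rows, cols)
    exposures : incident exposure of each flat, shape (n_exposures,)
    Returns the per-pixel slope and intercept 'equivalent images'.
    """
    x = np.asarray(exposures, dtype=float)
    y = np.asarray(flats, dtype=float)
    x_centered = x - x.mean()
    slope = ((x_centered[:, None, None] * (y - y.mean(axis=0))).sum(axis=0)
             / (x_centered ** 2).sum())
    intercept = y.mean(axis=0) - slope * x.mean()
    return slope, intercept

def flat_field_correct(raw, slope, intercept):
    """Invert the per-pixel response so each pixel reads out an exposure
    estimate; the stationary (fixed-pattern) component divides out."""
    return (raw.astype(float) - intercept) / np.maximum(slope, 1e-12)
```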

Journal ArticleDOI
Jian-yu Lu1
TL;DR: The quality (resolution and contrast) of constructed images is virtually identical for both methods, except that the Fourier method is simpler to implement.
Abstract: Limited diffraction beams have a large depth of field and have many potential applications. Recently, a new method (Fourier method) was developed with limited diffraction beams for image construction. With the method and a single plane wave transmission, both 2D (two-dimensional) and 3D (three-dimensional) images of a very high frame rate (up to 3750 frames/s for a depth of 200 mm in biological soft tissues) and a high signal-to-noise ratio (SNR) can be constructed with relatively simple and inexpensive hardware. If limited diffraction beams of different parameters are used in both transmission and reception and transducer aperture is shaded with a cosine function, high-resolution and low-sidelobe images can be constructed with the new method without montage of multiple frames of images [the image quality is comparable to that obtained with a transmit-receive (two-way) dynamically focused imaging system]. In this paper, the Fourier method was studied with both experiment and computer simulation for 2D B-mode imaging. In the experiment, two commercial broadband 1D array transducers (48 and 64 elements) of different aperture sizes (18.288 and 38.4 mm) and center frequencies (2.25 and 2.5 MHz) were used to construct images of different viewing sizes. An ATS539 tissue-equivalent phantom of an average frequency-dependent attenuation of 0.5 dB/MHz/cm was used as a test object. To obtain high frame rate images, a single plane wave pulse (broadband) was transmitted with the arrays. Echoes received with the arrays were processed with both the Fourier and conventional dynamic focusing (delay-and-sum) methods to construct 2D B-mode images. Results show that the quality (resolution and contrast) of constructed images is virtually identical for both methods, except that the Fourier method is simpler to implement. Both methods have also a similar sensitivity to phase aberration distortions. Excellent agreement among theory, simulation, and experiment was obtained.

Patent
18 Jun 1998
TL;DR: In this paper, a catheter including ultrasonic apparatus is introduced into and may be moved through a bodily lumen, and a processor coupled to the catheter is programmed to derive a first image or series of images and a second image/series of images from the detected ultrasound signals.
Abstract: A device and method for intravascular ultrasound imaging. A catheter including ultrasonic apparatus is introduced into and may be moved through a bodily lumen. The apparatus transmits ultrasonic signals and detects reflected ultrasound signals which contain information relating to the bodily lumen. A processor coupled to the catheter is programmed to derive a first image or series of images and a second image or series of images from the detected ultrasound signals. The processor is also programmed to compare the second image or series of images to the first image or series of images respectively. The processor may be programmed to stabilize the second image in relation to the first image and to limit drift. The processor may also be programmed to monitor the first and second images for cardiovascular periodicity, image quality, temporal change and vasomotion. It can also match the first series of images and the second series of images.

Journal ArticleDOI
TL;DR: A similar image quality to the current single-slice MVCT scanner is achieved with the advantage of providing tens of tomographic slices for a single gantry rotation.

Proceedings ArticleDOI
04 Oct 1998
TL;DR: This work considers the problem of image coding for communication systems that use diversity to overcome channel impairments, and forms a discrete optimization problem, whose solution gives parameters of the proposed encoder yielding optimal performance in an operational sense.
Abstract: We consider the problem of image coding for communication systems that use diversity to overcome channel impairments. We focus on the special case in which there are two channels of equal capacity between a transmitter and a receiver. Our designs are based on a combination of techniques successfully applied to the construction of some of the most efficient wavelet based image coding algorithms, with multiple description scalar quantizers (MDSQs). For a given image, we produce two bitstreams, to be transmitted over each channel. Should one of the channels fail, each individual description guarantees a minimum image quality specified by the user. However, if both descriptions arrive at the destination, they are combined to produce a higher quality image than that achievable based on individual descriptions. We formulate a discrete optimization problem, whose solution gives parameters of the proposed encoder yielding optimal performance in an operational sense. Simulation results are presented.

Patent
26 Mar 1998
TL;DR: In this paper, an electronic still imaging system employs an image sensor comprised of discrete light sensitive picture elements overlaid with a color filter array (CFA) pattern to produce color image data corresponding to the CFA pattern.
Abstract: An electronic still imaging system employs an image sensor comprised of discrete light sensitive picture elements overlaid with a color filter array (CFA) pattern to produce color image data corresponding to the CFA pattern, an A/D converter for producing digital CFA image data from the color image data, and a memory for storing the digital CFA image data from the picture elements. A processor enables the processing of the digital CFA image data to produce finished image data, and the digital CFA image data and the finished image data are both stored together in an image file. This enables image processing from raw camera data to final output data to be completed in a single, integrated process to provide improved image quality when printing.

Proceedings ArticleDOI
17 Jul 1998
TL;DR: A new video quality metric is described that extends these still image metrics into the time domain; like them, it is based on the Discrete Cosine Transform and is kept light in memory and computation so that it might be applied in the widest range of applications.
Abstract: The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
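
A minimal sketch of the per-block machinery such a DCT-based metric rests on: transform reference and test blocks, scale the coefficient differences by visibility thresholds, and pool the scaled errors. The threshold table and pooling exponent are placeholders, not the calibrated spatio-temporal sensitivities described above:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def block_visual_error(ref_block, test_block, thresholds, beta=4.0):
    """Perceptual error of one 8x8 block: DCT coefficient differences are
    divided by visibility thresholds (JND units) and pooled with a
    Minkowski sum; 'thresholds' is an assumed 8x8 array of sensitivities."""
    c = dct_matrix(8)
    d_ref = c @ ref_block @ c.T
    d_test = c @ test_block @ c.T
    jnd_errors = np.abs(d_ref - d_test) / thresholds
    return float((jnd_errors ** beta).sum() ** (1.0 / beta))
```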

Proceedings ArticleDOI
24 Jun 1998
TL;DR: A unified theory of tomosynthesis is derived in the context of linear system theory, and a general four-step filter design concept is presented, which is valid for any specific scan geometry.
Abstract: Tomosynthesis provides only incomplete 3D-data of the imaged object. Therefore it is important for reconstruction tasks to take all available information carefully into account. We are focusing on geometrical aspects of the scan process which can be incorporated into reconstruction algorithms by filtered backprojection methods. Our goal is a systematic approach to filter design. A unified theory of tomosynthesis is derived in the context of linear system theory, and a general four-step filter design concept is presented. Since the effects of filtering are understandable in this context, a methodical formulation of filter functions is possible in order to optimize image quality regarding the specific requirements of any application. By variation of filter parameters the slice thickness and the spatial resolution can easily be adjusted. The proposed general concept of filter design is exemplarily discussed for circular scanning but is valid for any specific scan geometry. The inherent limitations of tomosynthesis are pointed out and strategies for reducing the effects of incomplete sampling are developed. Results of a dental application show a striking improvement in image quality.

Journal ArticleDOI
08 Nov 1998
TL;DR: In this paper, the authors investigated the effect of simpler detector response function models on image quality in maximum likelihood expectation maximization reconstruction and found that DRF oversimplification may affect visual image quality and image quantification dramatically.
Abstract: One limitation in a practical implementation of statistical iterative image reconstruction is to compute a transition matrix accurately modeling the relationship between projection and image spaces. Detector response function (DRF) in positron emission tomography (PET) is broad and spatially-variant, leading to large transition matrices taking too much space to store. In this work, the authors investigate the effect of simpler DRF models on image quality in maximum likelihood expectation maximization reconstruction. The authors studied 6 cases of modeling projection/image relationship: tube/pixel geometric overlap with tubes centered on detector face; same as previous with tubes centered on DRF maximum; two different fixed-width Gaussian functions centered on DRF maximum weighing tube/pixel overlap; same as previous with a Gaussian of the same spectral resolution as DRF; analytic DRF based on linear attenuation of γ-rays in detector arrays weighing tube/pixel overlap. The authors found that DRF oversimplification may affect visual image quality and image quantification dramatically, including artefact generation. They showed that analytic DRF yielded images of excellent quality for a small animal PET system with long, narrow detectors and generated a transition matrix for 2-D reconstruction that could be easily fitted into the memory of current stand-alone computers.

Journal ArticleDOI
TL;DR: This model has been able to mimic quite accurately the temporally varying subjective picture quality of video sequences as recorded by the ITU-R SSCQE method.