
Showing papers on "Image quality" published in 1993


Proceedings ArticleDOI
08 Sep 1993
TL;DR: Here I show how to compute a matrix that is optimized for a particular image, and custom matrices for a number of images show clear improvement over image-independent matrices.
Abstract: This presentation describes how a vision model incorporating contrast sensitivity, contrast masking, and light adaptation is used to design visually optimal quantization matrices for Discrete Cosine Transform image compression. The Discrete Cosine Transform (DCT) underlies several image compression standards (JPEG, MPEG, H.261). The DCT is applied to 8x8 pixel blocks, and the resulting coefficients are quantized by division and rounding. The 8x8 'quantization matrix' of divisors determines the visual quality of the reconstructed image; the design of this matrix is left to the user. Since each DCT coefficient corresponds to a particular spatial frequency in a particular image region, each quantization error consists of a local increment or decrement in a particular frequency. After adjustments for contrast sensitivity, local light adaptation, and local contrast masking, this coefficient error can be converted to a just-noticeable-difference (jnd). The jnd's for different frequencies and image blocks can be pooled to yield a global perceptual error metric. With this metric, we can compute for each image the quantization matrix that minimizes the bit-rate for a given perceptual error, or perceptual error for a given bit-rate. Implementation of this system demonstrates its advantages over existing techniques. A unique feature of this scheme is that the quantization matrix is optimized for each individual image. This is compatible with the JPEG standard, which requires transmission of the quantization matrix.
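A minimal sketch of the block-DCT quantization step this abstract builds on, in Python with NumPy/SciPy. The flat quantization matrix and the per-coefficient visibility weights `w` are hypothetical placeholders; the paper's actual contrast-sensitivity, light-adaptation, and masking adjustments are not reproduced here.

```python
# Sketch (not Watson's full model): quantize 8x8 DCT blocks with a quantization
# matrix and pool a crude frequency-weighted quantization error.
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, Q):
    """Forward DCT, divide by quantization matrix, round (JPEG-style)."""
    coeffs = dctn(block - 128.0, type=2, norm='ortho')
    return np.round(coeffs / Q)

def dequantize_block(qcoeffs, Q):
    """Multiply back by the quantization matrix and invert the DCT."""
    return idctn(qcoeffs * Q, type=2, norm='ortho') + 128.0

def perceptual_error(block, Q, w):
    """Pool per-coefficient quantization errors weighted by visibility w."""
    q = quantize_block(block, Q)
    err = dctn(block - 128.0, type=2, norm='ortho') - q * Q
    return np.sum((w * err) ** 2) ** 0.5   # simple Minkowski pooling, beta = 2

# Toy usage: flat luminance quantization matrix, hypothetical visibility weights.
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
Q = np.full((8, 8), 16.0)
w = 1.0 / (1.0 + np.add.outer(np.arange(8), np.arange(8)))
print(perceptual_error(block, Q, w))
```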

776 citations


Proceedings ArticleDOI
01 Sep 1993
TL;DR: An adaptive display algorithm for interactive frame rates during visualization of very complex virtual environments, which adjusts image quality adaptively to maintain a uniform, user-specified target frame rate.
Abstract: We describe an adaptive display algorithm for interactive frame rates during visualization of very complex virtual environments. The algorithm relies upon a hierarchical model representation in which objects are described at multiple levels of detail and can be drawn with various rendering algorithms. The idea behind the algorithm is to adjust image quality adaptively to maintain a uniform, user-specified target frame rate. We perform a constrained optimization to choose a level of detail and rendering algorithm for each potentially visible object in order to generate the “best” image possible within the target frame time. Tests show that the algorithm generates more uniform frame rates than other previously described detail elision algorithms with little noticeable difference in image quality during visualization of complex models.
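A rough sketch, not the authors' algorithm: a greedy cost/benefit heuristic that picks one (level-of-detail, renderer) tuple per object under a frame-time budget, to illustrate the kind of constrained optimization described. All names and numbers below are illustrative assumptions.

```python
# Each candidate tuple carries an estimated render cost (ms) and a "benefit"
# score standing in for its contribution to image quality.
from dataclasses import dataclass

@dataclass
class Candidate:
    obj: str
    lod: int
    cost: float      # estimated render time in ms
    benefit: float   # estimated image-quality contribution

def choose_tuples(candidates, frame_budget_ms):
    # Start every object at its cheapest representation.
    chosen = {}
    for t in candidates:
        if t.obj not in chosen or t.cost < chosen[t.obj].cost:
            chosen[t.obj] = t
    budget = frame_budget_ms - sum(t.cost for t in chosen.values())
    # Greedily upgrade objects with the best benefit-per-cost ratio.
    for t in sorted(candidates, key=lambda t: t.benefit / max(t.cost, 1e-6),
                    reverse=True):
        cur = chosen[t.obj]
        extra = t.cost - cur.cost
        if t.benefit > cur.benefit and extra <= budget:
            budget -= extra
            chosen[t.obj] = t
    return chosen
```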

744 citations


Journal ArticleDOI
TL;DR: This paper considers the task of detection of a weak signal in a noisy image and suggests the Hotelling model with channels as a useful model observer for the purpose of assessing and optimizing image quality with respect to simple detection tasks.
Abstract: Image quality can be defined objectively in terms of the performance of some "observer" (either a human or a mathematical model) for some task of practical interest. If the end user of the image will be a human, model observers are used to predict the task performance of the human, as measured by psychophysical studies, and hence to serve as the basis for optimization of image quality. In this paper, we consider the task of detection of a weak signal in a noisy image. The mathematical observers considered include the ideal Bayesian, the nonprewhitening matched filter, a model based on linear-discriminant analysis and referred to as the Hotelling observer, and the Hotelling and Bayesian observers modified to account for the spatial-frequency-selective channels in the human visual system. The theory behind these observer models is briefly reviewed, and several psychophysical studies relating to the choice among them are summarized. Only the Hotelling model with channels is mathematically tractable in all cases considered here and capable of accounting for all of these data. This model requires no adjustment of parameters to fit the data and is relatively insensitive to the details of the channel mechanism. We therefore suggest it as a useful model observer for the purpose of assessing and optimizing image quality with respect to simple detection tasks.
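An illustrative sketch of a channelized Hotelling computation, assuming simple radial band-pass channels (not the paper's channel profiles) and sample-based estimates of the channel means and covariance.

```python
import numpy as np

def radial_bandpass_channels(n, bands):
    """Build (n*n, n_channels) channel profiles from radial frequency bands."""
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    r = np.hypot(fx, fy)
    chans = []
    for lo, hi in bands:
        mask = (r >= lo) & (r < hi)
        chans.append(np.real(np.fft.ifft2(mask.astype(float))).ravel())
    return np.stack(chans, axis=1)

def hotelling_snr(signal_imgs, noise_imgs, U):
    """Channelized Hotelling detectability from sample image stacks."""
    v_s = signal_imgs.reshape(len(signal_imgs), -1) @ U   # channel outputs
    v_n = noise_imgs.reshape(len(noise_imgs), -1) @ U
    dv = v_s.mean(0) - v_n.mean(0)                        # mean channel signal
    K = 0.5 * (np.cov(v_s.T) + np.cov(v_n.T))             # pooled covariance
    return float(np.sqrt(dv @ np.linalg.solve(K, dv)))
```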

465 citations


Book
01 Dec 1993
TL;DR: In this book, the authors cover infrared imaging system operation, basic concepts in IR technology, general measuring techniques, focus and system resolution, system responsivity, noise, the contrast, modulation, and phase transfer functions, the geometric transfer function, observer interpretation of image quality, automated testing, and statistical analysis.
Abstract: Infrared imaging system operation; basic concepts in IR technology; general measuring techniques; focus and system resolution; system responsivity; noise; contrast, modulation, and phase transfer functions; geometric transfer function; observer interpretation of image quality; automated testing; statistical analysis.

219 citations


Journal ArticleDOI
Jong Beom Ra, C. Y. Rim
TL;DR: A new fast imaging method using a subencoding data acquisition scheme and a multiple coil receiver system is proposed and demonstrated, which can be easily adapted to conventional imaging methods including fast imaging to further reduce the scan time.
Abstract: A new fast imaging method using a subencoding data acquisition scheme and a multiple-coil receiver system is proposed and demonstrated. In this method, a set of aliased images is produced from the receiver coils using the subencoded data, without sacrificing the desired resolution, and is then resolved into an aliasing-free image using the distance-dependent sensitivity information of each coil. The reduction in data acquisition time is proportional to the number of receiver coils. This method can be easily adapted to conventional imaging methods, including fast imaging, to further reduce the scan time.
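A minimal sketch of the unfolding step implied by the abstract, assuming the coil sensitivities at the folded pixel locations are already known; the acquisition and sensitivity-estimation stages are omitted.

```python
# With an R-fold subencoded acquisition, each aliased pixel is a superposition
# of R true pixels. Knowing each coil's sensitivity at those R locations, the
# true values are recovered by a small per-pixel least-squares solve.
import numpy as np

def unfold(aliased, sens, R):
    """aliased: (n_coils, N//R, N) aliased coil images; sens: (n_coils, N, N)."""
    n_coils, n_alias, N = aliased.shape
    full = np.zeros((N, N), dtype=complex)
    for y in range(n_alias):
        rows = [y + k * n_alias for k in range(R)]   # pixels folded onto row y
        for x in range(N):
            S = sens[:, rows, x]                     # (n_coils, R) sensitivities
            a = aliased[:, y, x]                     # (n_coils,) aliased values
            full[rows, x], *_ = np.linalg.lstsq(S, a, rcond=None)
    return full
```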

206 citations


Journal ArticleDOI
TL;DR: In this article, a new method is developed for testing the optical quality of ground-based telescopes using wideband long-exposure defocused stellar images recorded with current astronomical CCD cameras, with an iterative algorithm that simulates closed-loop wavefront compensation in adaptive optics.
Abstract: A new method has been developed for testing the optical quality of ground-based telescopes. Aberrations are estimated from wideband long-exposure defocused stellar images recorded with current astronomical CCD cameras. An iterative algorithm is used that simulates closed-loop wave-front compensation in adaptive optics. Compared with the conventional Hartmann test, the new method is easier to implement, has similar accuracy, and provides a higher spatial resolution on the reconstructed wave front. It has been applied to several astronomical telescopes and has been found to be a powerful diagnostic tool for improving image quality.

188 citations


Journal ArticleDOI
31 Oct 1993
TL;DR: It is concluded that reconstruction methods which accurately compensate for nonuniform attenuation can substantially reduce image degradation caused by variations in patient anatomy in cardiac SPECT.
Abstract: Patient anatomy has complicated effects on cardiac SPECT images. The authors investigated reconstruction methods which substantially reduced these effects for improved image quality. A 3D mathematical cardiac-torso (MCAT) phantom, which models the anatomical structures in the thorax region, was used in the study. The phantom was modified to simulate variations in patient anatomy, including regions of natural thinning along the myocardium, body size, diaphragmatic shape, gender, and the size and shape of breasts for female patients. Distributions of attenuation coefficients and Tl-201 uptake in different organs of a normal patient were also simulated. Emission projection data were generated from the phantoms, including the effects of attenuation and detector response. The authors observed the attenuation-induced artifacts caused by patient anatomy in the conventional FBP reconstructed images. Accurate attenuation compensation using iterative reconstruction algorithms and attenuation maps substantially reduced the image artifacts and improved quantitative accuracy. The authors conclude that reconstruction methods which accurately compensate for nonuniform attenuation can substantially reduce image degradation caused by variations in patient anatomy in cardiac SPECT.
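For illustration, a bare-bones MLEM update of the kind of iterative reconstruction referred to above, with nonuniform attenuation assumed to be folded into a dense system matrix A (a placeholder for a real projector/backprojector pair).

```python
import numpy as np

def mlem(A, y, n_iter=20):
    """A: (n_bins, n_voxels) system matrix including attenuation; y: measured counts."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # backprojection of ones
    for _ in range(n_iter):
        proj = A @ x                           # forward-project current estimate
        ratio = y / np.maximum(proj, 1e-12)    # guard against divide-by-zero
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```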

148 citations


Journal ArticleDOI
TL;DR: This technique enables the collection of data necessary for image reconstruction in a reduced number of phase‐encoded acquisitions, which results in a 50% reduction in minimum scan time and may be useful in time‐critical procedures.
Abstract: A technique is described for the simultaneous acquisition of MRI data using two independent receiver coils surrounding the same region of tissue, which enables the collection of the data necessary for image reconstruction in a reduced number of phase-encoded acquisitions. This results in a 50% reduction in minimum scan time and may be useful in time-critical procedures. The algorithm and imaging procedures are described, and example images are shown that illustrate the reconstruction. Signal to noise is decreased by the square root of the time savings, making this technique applicable to cases in which the need to decrease minimum scan time outweighs the signal to noise penalty.
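Restating the trade-off quoted in the abstract as a formula (assuming all other acquisition parameters are held fixed), with R the factor by which the number of phase-encoded acquisitions is reduced (R = 2 here):

\[ T_{\text{scan}} \;\rightarrow\; \frac{T_{\text{scan}}}{R}, \qquad \mathrm{SNR} \;\rightarrow\; \frac{\mathrm{SNR}}{\sqrt{R}} \approx 0.71\,\mathrm{SNR} \quad (R = 2). \]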

140 citations


Journal ArticleDOI
TL;DR: The measurement of SNR is based on implementing algorithmic realizations of specified observers and analysing their responses while actually performing a specified detection task of interest and has been extended to include temporally varying images and dynamic imaging systems.
Abstract: A method of measuring the image quality of medical imaging equipment is considered within the framework of statistical decision theory. In this approach, images are regarded as random vectors and image quality is defined in the context of the image information available for performing a specified detection or discrimination task. The approach provides a means of measuring image quality, as related to the detection of an image detail of interest, without reference to the actual physical mechanisms involved in image formation and without separate measurements of signal transfer characteristics or image noise. The measurement does not, however, consider deterministic errors in the image; they need a separate evaluation for imaging modalities where they are of concern. The detectability of an image detail can be expressed in terms of the ideal observer's signal-to-noise ratio (SNR) at the decision level.
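For reference, the standard signal-known-exactly expression for the ideal observer's SNR in Gaussian noise, consistent with the decision-theoretic framework described (notation is ours, not quoted from the paper): with expected signal difference \(\Delta\mathbf{g}\) and noise covariance \(\mathbf{K}_n\),

\[ \mathrm{SNR}_I^{\,2} = \Delta\mathbf{g}^{\mathsf{T}}\,\mathbf{K}_n^{-1}\,\Delta\mathbf{g}, \]

which reduces to the familiar matched-filter result when the noise is white.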

127 citations


Journal ArticleDOI
TL;DR: The quality of reconstructed images from in-line holograms can be seriously degraded by the linear superposition of twin images having the same information but different foci, so an iterative procedure for twin-image elimination is proposed, which can reconstruct complex objects, provided that they are not recorded in very near-field conditions.
Abstract: The quality of reconstructed images from in-line holograms can be seriously degraded by the linear superposition of twin images having the same information but different foci. Starting from the reconstructed field at the real image plane, we make use of the uncontaminated information contained in the out-of-focus wave (virtual image) outside the in-focus wave (real image) support, together with a finite-support constraint, to form an iterative procedure for twin-image elimination. This algorithm can reconstruct complex objects, provided that they are not recorded in very near-field conditions. For real objects additional constraints can be imposed, extending the algorithm application to very near-field conditions. The algorithm’s convergence properties are studied in both cases, and some examples are shown.
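A loose sketch of a support-constrained iteration in this spirit: propagate between the real-image plane and the conjugate plane with an angular-spectrum propagator, re-impose a finite-support constraint, and restore the measured field. The wavelength, distance, support mask, and the specific update rule are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def angular_spectrum(field, wavelength, dz, dx):
    """Propagate a square sampled field by dz with the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(2j * np.pi * dz / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def remove_twin(field_real_plane, support, wavelength, z, dx, n_iter=30):
    f = field_real_plane.copy()
    for _ in range(n_iter):
        f_out = angular_spectrum(f, wavelength, 2 * z, dx)   # to conjugate plane
        f_out *= support                                     # finite-support constraint
        f = angular_spectrum(f_out, wavelength, -2 * z, dx)  # back to real-image plane
        f = np.where(support > 0, f, field_real_plane)       # keep measured field outside support
    return f
```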

104 citations


Journal ArticleDOI
TL;DR: In this paper, a method for removing the azimuth ambiguities from synthetic aperture radar (SAR) images is proposed, which corresponds to an ideal filter concept, where an ideal impulse response function is obtained even in the presence of several phase and amplitude errors.
Abstract: A method for removing the azimuth ambiguities from synthetic aperture radar (SAR) images is proposed. The basic idea is to generate a two-dimensional reference function for SAR processing which provides, in addition to the matched filtering for the unaliased part of the received signal, the deconvolution of the azimuth ambiguities. This approach corresponds to an ideal filter concept, where an ideal impulse response function is obtained even in the presence of several phase and amplitude errors. Modeling the sampled azimuth signal shows that the absolute phase value of the ambiguities cannot easily be determined due to their undersampling. The concept of the ideal filter is then extended to accommodate the undefined phase of the ambiguities and also the fading of the azimuth signal. Raw data from the E-SAR system have been used to verify the improvement in image quality obtained by the new method. It has a substantial advantage in enabling the pulse-repetition frequency (PRF) constraints in the SAR system design to be relaxed and also for improving SAR image quality and interpretation.

Journal ArticleDOI
TL;DR: Stochastic temporal filtering techniques are proposed to enhance clinical fluoroscopy sequences corrupted by quantum mottle and the problem of displacement field estimation is treated in conjunction with the filtering stage to ensure that the temporal correlations are taken along the direction of motion to prevent object blur.
Abstract: Clinical angiography requires hundreds of X-ray images, putting the patients and particularly the medical staff at risk. Dosage reduction involves an inevitable sacrifice in image quality. In this work, the latter problem is addressed by first modeling the signal-dependent, Poisson-distributed noise that arises as a result of this dosage reduction. The commonly utilized noise model for single images is shown to be obtainable from the new model. Stochastic temporal filtering techniques are proposed to enhance clinical fluoroscopy sequences corrupted by quantum mottle. The temporal versions of these filters as developed here are more suitable for filtering image sequences, as correlations along the time axis can be utilized. For these dynamic sequences, the problem of displacement field estimation is treated in conjunction with the filtering stage to ensure that the temporal correlations are taken along the direction of motion to prevent object blur.
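A minimal sketch of motion-compensated recursive temporal filtering, the general mechanism the abstract describes. The simple exponential averaging and bilinear warping below stand in for the paper's Poisson-noise-adapted filters, and the displacement fields are assumed to be given.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(img, flow):
    """Warp img backward along a displacement field flow of shape (2, H, W)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([yy + flow[0], xx + flow[1]])
    return map_coordinates(img, coords, order=1, mode="nearest")

def temporal_filter(frames, flows, alpha=0.7):
    """frames: list of 2-D arrays; flows[t]: displacement from frame t to t-1."""
    out = [frames[0].astype(float)]
    for t in range(1, len(frames)):
        pred = warp(out[-1], flows[t])                 # motion-compensated prediction
        out.append(alpha * pred + (1 - alpha) * frames[t])
    return out
```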

Journal ArticleDOI
TL;DR: The feasibility of a mobile video handset is investigated for Rayleigh-fading channels, where transmissions must be confined to the channel's coherence bandwidth to avoid the deployment of complex high-power-consumption channel equalizers.
Abstract: The feasibility of a mobile video handset is investigated for Rayleigh-fading channels, where transmissions must be confined to the channel's coherence bandwidth to avoid the deployment of complex high-power-consumption channel equalizers. This necessitates the utilization of a low-bit-rate image codec error-protected by embedded low-complexity BCH codecs and spectrally efficient 16-level quadrature amplitude modulation (16 QAM). Motion-compensated nonuniform seven-band subband coding with subband-specific scanning, adaptive quantization, runlength coding, and adaptive buffering to equalize bit-rate fluctuations offers good objective and subjective image quality at moderate complexity and a bit-rate of 55 kb/s. Using twin-class embedded BCH error protection as well as pilot-symbol-assisted 16 QAM and diversity, the 22 kBd candidate system yields unimpaired image quality for average channel signal-to-noise ratios (SNRs) in excess of about 16-18 dB when the mobile speed is 4 miles/h.

Journal ArticleDOI
TL;DR: The experiments show that wavelet encoding by selective excitation of wavelet-shaped profiles is feasible, and there is no discernible degradation in image quality due to the wavelet encoding.
Abstract: Reconstructions of images from wavelet-encoded data are shown. The method of MR wavelet encoding in one dimension was proposed previously by Weaver and Healy. The technique relies on selective excitation with wavelet-shaped profiles generated by special radio-frequency waveforms. The result of the imaging sequence is a set of inner products of the image with orthogonal functions of the wavelet basis. Inversion of the wavelet data is accomplished with an efficient algorithm with processing times comparable with those of a fast Fourier transform. The experiments show that wavelet encoding by selective excitation of wavelet-shaped profiles is feasible. Wavelet-encoded images are compared with phase-encoded images that have a similar signal-to-noise ratio, and there is no discernible degradation in image quality due to the wavelet encoding. Potential benefits of wavelet encoding are briefly discussed.
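A small sketch of the encoding/decoding equivalence: acquiring inner products with an orthogonal wavelet basis amounts to taking a discrete wavelet transform of the profile, which is inverted by the fast inverse transform. PyWavelets with a Haar basis is used here as a stand-in for the RF-excited wavelet profiles of the paper.

```python
import numpy as np
import pywt

profile = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.5   # toy 1-D "image" profile
coeffs = pywt.wavedec(profile, "haar", level=4)          # "acquired" inner products
recon = pywt.waverec(coeffs, "haar")                     # fast inverse transform
print(np.allclose(recon, profile))                       # True: lossless for an orthogonal basis
```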

Patent
19 Jan 1993
TL;DR: In this article, a method and apparatus for objectively measuring the image quality of a destination video signal generates measurement parameters that are indicative of human image quality perceptions, which are generated for a variety of test scenes and types of image impairment.
Abstract: A method and apparatus for objectively measuring the image quality of a destination video signal generates measurement parameters that are indicative of human image quality perceptions. Subjective human test panel results are generated for a variety of test scenes and types of image impairment. Objective test results are also generated by the apparatus of the present invention for the variety of test scenes and image impairments. A statistical analysis means statistically analyzes the subjective and objective test results to determine operation of the apparatus. Accordingly, when the apparatus extracts test frames from the actual source and destination video signals and compares them, image quality parameters are output by the apparatus which are based on human image quality perceptions.

Patent
04 Jun 1993
TL;DR: In this patent, a video signal processing apparatus is presented for correcting vibration effects in a video camera that has a CCD image pickup device with a number of lines greater than the number of lines of the standard NTSC television system.
Abstract: A video signal processing apparatus for correcting vibration effects in a video camera that has a CCD image pickup device with a number of lines greater than the number of lines of the standard NTSC television system. The lines used to generate the image are shifted during the vertical blanking interval to correct for vibrations of the video camera, and the overflow charges caused by driving the CCD image pickup device at high speed during the blanking period are absorbed into a semiconductor drain element arranged in parallel with the horizontal transfer register of the CCD image pickup device. Defective pixels in the image pickup device are compensated for by storing their addresses and interpolating at those positions; during vibration correction, the addresses of the defective pixels are shifted to correspond to the amount of vibration correction. A read clock and a write clock for the line memory that is used to perform the vibration correction in the horizontal direction are set to line-locked clock signals of different frequencies, thereby preventing deterioration of the picture quality upon vibration correction. The window used to detect optical information for automatic camera control is also shifted in accordance with the shift amount used to perform vibration correction.

Journal ArticleDOI
TL;DR: This work introduces a new method of gray-level image halftoning that uses visual modeling within the framework of error diffusion to improve the image quality of halftoned images.
Abstract: Continued advances in binary image printers have spurred an increased interest in the use of digital image halftoning to generate low-cost images that have the appearance of gray levels. However, at current print resolutions for desktop applications, i.e., 300–400 dots/in. (dpi), the binary noise resulting from halftoning is clearly visible at normal viewing distances. Halftoning algorithms that reduce the visibility of this noise result in smoother gray levels and higher-quality output images. We introduce a new method of gray-level image halftoning that uses visual modeling within the framework of error diffusion to improve the image quality of halftoned images.
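For context, a sketch of classic error diffusion itself (Floyd-Steinberg weights), without the visual model the paper adds on top of it.

```python
import numpy as np

def error_diffuse(img):
    """img: 2-D float array in [0, 1]; returns a binary halftone."""
    f = img.astype(float).copy()
    out = np.zeros_like(f)
    h, w = f.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - out[y, x]            # quantization error at this pixel
            if x + 1 < w:               f[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     f[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               f[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: f[y + 1, x + 1] += err * 1 / 16
    return out
```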

Proceedings ArticleDOI
Seong-Won Lee, Joonki Paik
27 Apr 1993
TL;DR: The proposed adaptive version of a B-spline interpolation algorithm exhibits significant improvements in image quality compared with the conventional B-spline-type algorithm, especially at high magnification ratios such as four times or more.
Abstract: An adaptive version of a B-spline interpolation algorithm is proposed. Adaptivity is used in two different phases: (1) adaptive zero-order interpolation is realized by considering directional edge information, and (2) an adaptive length of the moving average filter in four directions is obtained by computing the local image statistics. The proposed algorithm exhibits significant improvements in image quality compared with the conventional B-spline-type algorithm, especially at high magnification ratios such as four times or more. Another advantage of the proposed algorithm is its simplicity in both computation and implementation.
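As a baseline illustration, non-adaptive cubic B-spline interpolation at 4x magnification via SciPy; the paper's adaptive edge-directed and moving-average refinements are not reproduced.

```python
import numpy as np
from scipy.ndimage import zoom

img = np.random.default_rng(1).random((64, 64))   # toy image
enlarged = zoom(img, 4, order=3)                  # cubic B-spline, 4x magnification
print(enlarged.shape)                             # (256, 256)
```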

Journal ArticleDOI
TL;DR: With the new level adaptive overdrive (LAO) method, the response time was reduced to 17 ms, which is about one-half to one-third that of the conventional method and is satisfactory for TV applications.
Abstract: — A new low-image-lag drive method is proposed for large-size LCTVs, taking into account the input signal and inter-field differential signal dependence of the response time. It was shown that the gray-level response time was 30–60 ms, which is 2–3 times as long as the ON-OFF bi-level response time, and was expressed as a linear function of the input voltage. With the new level adaptive overdrive (LAO) method, the response time was reduced to 17 ms, which is about one-half to one-third that of the conventional method and is satisfactory for TV applications. A 10.4-in. panel has been used to verify the effect of LAO on image quality. Moving images have been clearly observed using the LAO method.
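A toy sketch of the overdrive principle: the drive level applied in the current frame depends on both the previous and the target gray levels, so that slow gray-to-gray transitions are pushed past the target. The single gain below is a hypothetical stand-in for the paper's level-adaptive overdrive table.

```python
import numpy as np

def overdrive(prev_frame, target_frame, gain=0.6, levels=255):
    """Boost the drive signal in proportion to the inter-frame difference."""
    drive = target_frame + gain * (target_frame - prev_frame)
    return np.clip(drive, 0, levels)
```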

Patent
26 Jul 1993
TL;DR: When an image is formed by a number of dots obtained by discharging ink from a print head to attach the ink onto cloths, the ink amount discharged from the print head onto the cloths is controlled to produce ink jet printed products such that the average equivalent-circle diameter of each dot after image formation is three-fourths or less of the average diameter of the fibers constituting said cloths.
Abstract: An object is to provide ink jet printed products superior in image quality, such that ink jet printing onto cloths satisfies various conditions regarding density, resolution, blurring, and dot graininess. To accomplish this object, when an image is formed by a number of dots obtained by discharging ink from a print head to attach the ink onto the cloths, the ink amount discharged from the print head onto the cloths is controlled to produce ink jet printed products such that the average equivalent-circle diameter of each dot after image formation is three-fourths or less of the average diameter of the fibers constituting said cloths. Thereby, ink jet printed products excellent in image quality can be obtained, with reduced blurring and improved dot graininess.

Journal Article
A. Wenzel
TL;DR: The era of digital imaging in dentistry has certainly commenced, and current intraoral digital systems have been shown to provide definite diagnostic advantages; the major advantages, however, may be the significant dose reductions and the ability to manipulate image quality.

Journal ArticleDOI
TL;DR: The local cosine transform (LCT) can be added as an optional step for improving the quality of existing DCT (JPEG) encoders by reducing the blocking effect and smoothing the image quality.
Abstract: This paper presents the local cosine transform (LCT) as a new method for the reduction and smoothing of the blocking effect that appears at low bit rates in image coding algorithms based on the discrete cosine transform (DCT). In particular, the blocking effect appears in the JPEG baseline sequential algorithm. Two types of LCT were developed: LCT-IV is based on the DCT type IV, and LCT-II is based on DCT type II, which is known as the standard DCT. At the encoder side the image is first divided into small blocks of pixels. Both types of LCT have basis functions that overlap adjacent blocks. Prior to the DCT coding algorithm a preprocessing phase in which the image is multiplied by smooth cutoff functions (or bells) that overlap adjacent blocks is applied. This is implemented by folding the overlapping parts of the bells back into the original blocks, and thus it permits the DCT algorithm to operate on the resulting blocks. At the decoder side the inverse LCT is performed by unfolding the samples back to produce the overlapped bells. The purpose of the multiplication by the bell is to reduce the gaps and inaccuracies that may be introduced by the encoder during the quantization step. LCT-IV and LCT-II were applied on images as a preprocessing phase followed by the JPEG baseline sequential compression algorithm. For LCT-IV, the DCT type IV replaced the standard DCT as the kernel of the transform coding. In both cases, for the same low bit rates the blocking effect was smoothed and reduced while the image quality in terms of mean-square error became better. Subjective tests performed on a group of observers also confirm these results. Thus the LCT can be added as an optional step for improving the quality of existing DCT (JPEG) encoders. Advantages over other methods that attempt to reduce the blocking effect due to quantization are also described.

Journal ArticleDOI
TL;DR: This study evaluated dentists' perception of the quality of digitally captured radiographs; because the majority of dentists preferred a treated image to the original version, image treatment possibilities should be offered in digital radiography.
Abstract: This study evaluated dentists' perception of the quality of digitally captured radiographs. Thirty radiographs were taken with the Visualix digital video radiographic system, 10 periapicals for tooth and bone anatomy, 10 periapicals for bone disease and 10 bitewings for dental caries. Three numeric copies were made of each image and treated with three different filters: 'optimize', 'enhance' and 'enhance + smooth', respectively. Four images of the same case were displayed simultaneously in a random sequence on the monitor. Twenty dentists ranked each set of four images on a scale from 1 to 4. In general, most dentists preferred a treated image to the original. The optimized and enhanced images were selected most frequently as first or second choice from the tooth and bone anatomy and bone disease groups. The original image was ranked lowest in more than half (55%) of the series. For the bitewings, the smoothed images were ranked significantly higher. In conclusion, image treatment possibilities should be offered in digital radiography as the majority of dentists preferred a treated image to the original version. The image treatment chosen seemed to be task dependent; less treatment was required to delineate the more subtle tissue differences.

Patent
Fujii Akio
23 Feb 1993
TL;DR: In an apparatus for quantizing conversion data resulting from converting image information into frequency regions and then coding the quantized data, the coefficient by which a chrominance quantizing matrix is multiplied is set depending on the coefficient by which a luminance quantizing matrix is multiplied.
Abstract: The invention is intended to improve image quality in a process of image coding. In an apparatus for quantizing conversion data resulting from converting image information into frequency regions and then coding the quantized data, the coefficient by which a chrominance quantizing matrix is multiplied is set depending on the coefficient by which a luminance quantizing matrix is multiplied, for an improvement in color reproducibility.

Journal ArticleDOI
TL;DR: The authors' study is unlike previous studies of the effects of lossy compression in that they consider nonbinary detection tasks, simulate actual diagnostic practice instead of using paired tests or confidence rankings, and use statistical methods that are more appropriate for nonbinary clinical data than are the popular receiver operating characteristic curves.
Abstract: The authors apply a lossy compression algorithm to medical images, and quantify the quality of the images by the diagnostic performance of radiologists, as well as by traditional signal-to-noise ratios and subjective ratings. The authors' study is unlike previous studies of the effects of lossy compression in that they consider nonbinary detection tasks, simulate actual diagnostic practice instead of using paired tests or confidence rankings, use statistical methods that are more appropriate for nonbinary clinical data than are the popular receiver operating characteristic curves, and use low-complexity predictive tree-structured vector quantization for compression rather than DCT-based transform codes combined with entropy coding. The authors' diagnostic tasks are the identification of nodules (tumors) in the lungs and lymphadenopathy in the mediastinum from computerized tomography (CT) chest scans. Radiologists read both uncompressed and lossy compressed versions of images. For the image modality, compression algorithm, and diagnostic tasks the authors consider, the original 12 bit per pixel (bpp) CT image can be compressed to between 1 bpp and 2 bpp with no significant changes in diagnostic accuracy. The techniques presented here for evaluating image quality do not depend on the specific compression algorithm and are useful new methods for evaluating the benefits of any lossy image processing technique.

Proceedings ArticleDOI
01 Apr 1993
TL;DR: In this paper, the authors survey and give a classification of the criteria for the evaluation of monochrome image quality, including the mean square error (MSE).
Abstract: Although a variety of techniques are available today for gray-scale image compression, a complete evaluation of these techniques cannot be made as there is no single reliable objective criterion for measuring the error in compressed images. The traditional subjective criteria are burdensome, and usually inaccurate or inconsistent. On the other hand, being the most common objective criterion, the mean square error (MSE) does not have a good correlation with the viewer's response. It is now understood that in order to have a reliable quality measure, a representative model of the complex human visual system is required. In this paper, we survey and give a classification of the criteria for the evaluation of monochrome image quality.
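For reference, the standard definition of the MSE criterion discussed, with PSNR as its commonly quoted companion figure for an 8-bit image:

\[ \mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(x_{ij}-\hat{x}_{ij}\bigr)^{2}, \qquad \mathrm{PSNR} = 10\log_{10}\!\frac{255^{2}}{\mathrm{MSE}}\ \text{dB}. \]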

Patent
23 Dec 1993
TL;DR: In this article, a technique for extracting an image feature, especially a thin image portion of a specific color of an image, and for encoding the image portion with high efficiency to minimize degradation of image quality by selecting an appropriate encoding method is described.
Abstract: This invention relates to an image processing apparatus having a function of encoding or decoding image data and discloses a technique for extracting an image feature, especially a thin image portion of a specific color of an image, and for encoding the thin image portion with high efficiency to minimize degradation of image quality by selecting an appropriate encoding method.


Patent
06 Dec 1993
TL;DR: In this article, a method and apparatus for processing image data produced by an electronic camera by over-sampling and interpolating the data to convert the format and/or resolution of the data is presented.
Abstract: A method and apparatus for processing image data produced by an electronic camera (such as image data read from a CCD device of an electronic camera) by over-sampling and then interpolating the data to convert the format and/or resolution of the data. The format of the image data may be converted to a format suitable for display on a computer monitor or the like. The invention enables the aspect ratio of a frame of image data output from a CCD device to be converted to 1:1. Preferably, the image data are filtered by a filter having a characteristic opposite to the frequency characteristic of the interpolation function, and then interpolated so that the interpolation can be carried out at a portion where the change of the frequency characteristic is small, thereby improving the image quality of the fully processed output image. The invention can be implemented in an electronic camera including an over-sampling filter (4) for over-sampling input data N times and an interpolation circuit (5) for interpolating the output from the over-sampling filter.

Journal ArticleDOI
TL;DR: A physical evaluation of modern PPCR technology and some of the findings relevant to general radiographic applications are reviewed here, including the function of the auto-reader system and the reliability of image reproduction, the radiation exposure requirement and physical image quality.
Abstract: Currently photostimulable phosphor computed radiography (PPCR) promises to be the digital X-ray image acquisition technology of choice for classical radiography (i.e. X-ray examinations of natural anatomy). For the last two years we have been carrying out a physical evaluation of modern PPCR technology and some of our findings relevant to general radiographic applications are reviewed here. Topics covered include the function of the auto-reader system and the reliability of image reproduction, the radiation exposure requirement and physical image quality. The latter is based upon both objective and subjective measures of image quality. These studies have yielded a favourable comparison of the image quality of modern PPCR technology with that of medium-speed and fast radiographic screen-film combinations. The major advantages of PPCR appear to be the maintenance of high imaging efficiency (DQE) over a much wider range of signal levels than film and consistent image acquisition and presentation ind...