
Showing papers on "Image quality" published in 1989


Journal ArticleDOI
TL;DR: The ANALYZE software system, which permits detailed investigation and evaluation of 3-D biomedical images, is discussed; it is unique in its synergistic integration of fully interactive modules for direct display, manipulation, and measurement of multidimensional image data.
Abstract: The ANALYZE software system, which permits detailed investigation and evaluation of 3-D biomedical images, is discussed. ANALYZE can be used with 3-D imaging modalities based on X-ray computed tomography, radionuclide emission tomography, ultrasound tomography, and magnetic resonance imaging. The package is unique in its synergistic integration of fully interactive modules for direct display, manipulation, and measurement of multidimensional image data. One of the most versatile and powerful capabilities in ANALYZE is image volume rendering for 3-D display. An important advantage of this technique is that it can be used to display 3-D images directly from the original data set and to provide on-the-fly combinations of selected image transformations, such as surface segmentation, cutting planes, transparency, and/or volume set operations (union, intersection, difference, etc.). The module has been optimized to be fast (interactive) without compromising image quality. The software is written entirely in C and runs on standard UNIX workstations.

366 citations


Journal Article
TL;DR: Results from a heart-lung phantom study and a 201Tl patient study demonstrated that the iterative EM algorithm with attenuation correction provided improved image quality in terms of reduced streak artifacts and noise, and more accurate quantitative information in terms of improved radioactivity distribution uniformity where uniformity existed, and better anatomic object definition.
Abstract: Correction for photon attenuation in cardiac SPECT imaging using a measured attenuation distribution with an iterative expectation maximization (EM) algorithm and an iterative Chang algorithm were compared with the conventional filtered backprojection and an iterative EM algorithm without attenuation correction. The attenuation distribution was determined from a transmission computed tomography study that was obtained using an external collimated sheet source. The attenuation of the emitting photons was modeled in the EM algorithm by an attenuated projector-backprojector that used the estimated attenuation distribution to calculate attenuation factors for each pixel along each projection and backprojection ray. Results from a heart-lung phantom study and a 201Tl patient study demonstrated that the iterative EM algorithm with attenuation correction provided improved image quality in terms of reduced streak artifacts and noise, and more accurate quantitative information in terms of improved radioactivity distribution uniformity where uniformity existed, and better anatomic object definition.
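The attenuated projector-backprojector described above lends itself to a compact sketch. The fragment below is a minimal, generic MLEM update in Python/NumPy, assuming the per-ray attenuation factors have already been folded into a precomputed system matrix `A`; it illustrates the iteration, not the authors' implementation.

```python
import numpy as np

def mlem_attenuated(A, proj, n_iter=20):
    """Minimal MLEM sketch. A: (n_rays, n_pixels) system matrix whose entries
    already include the attenuation factor of each pixel along each ray, as
    estimated from the transmission scan; proj: measured emission data."""
    x = np.ones(A.shape[1])                    # uniform initial estimate
    sens = A.sum(axis=0)                       # sensitivity image (backprojection of ones)
    sens = np.where(sens > 0, sens, 1e-12)
    for _ in range(n_iter):
        fwd = A @ x                            # attenuated forward projection
        ratio = proj / np.maximum(fwd, 1e-12)  # measured / estimated projections
        x = x * (A.T @ ratio) / sens           # multiplicative EM update
    return x
```

Each iteration forward-projects the current estimate, compares it with the measured data, and backprojects the ratio, so attenuation is modeled consistently in both directions.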

256 citations


Journal ArticleDOI
TL;DR: First data on long-term drift of results and effects of changes in patient composition show the new method to be superior to present radionuclide systems; it is likely that this new method will become the standard for bone density measurements.
Abstract: A recently introduced method (dual-photon X-ray absorptiometry, DEXA) capable of measuring skeletal density in man (at present in the spine and hips, but ultimately for the whole body) has been evaluated in terms of its ability to perform long-term assessment of bone density changes. The method, which uses X rays rather than gamma rays as its photon source, represents a significant improvement over present systems both in image quality and precision (reproducibility) of results, which is better than 1% in vivo. Scanning time is approximately halved compared with present techniques and the radiation dose is reduced by 25%. First data on long-term drift of results and effects of changes in patient composition (i.e. thickness and fat content) are given and show the new method to be superior to present radionuclide systems. It is likely that this new method will become the standard for bone density measurements.

208 citations


Journal ArticleDOI
TL;DR: This work presents a new iterative algorithm that holds promise of being a robust estimator and corrector for arbitrary phase errors and demonstrates its ability to focus scenes containing large amounts of phase error regardless of the phase-error structure or its source.
Abstract: Uncompensated phase errors present in synthetic-aperture-radar data can have a disastrous effect on reconstructed image quality. We present a new iterative algorithm that holds promise of being a robust estimator and corrector for arbitrary phase errors. Our algorithm is similar in many respects to speckle processing methods currently used in optical astronomy. We demonstrate its ability to focus scenes containing large amounts of phase error regardless of the phase-error structure or its source. The algorithm works extremely well in both high and low signal-to-clutter conditions without human intervention.
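The paper's algorithm is not reproduced here, but the flavor of such speckle-processing autofocus methods can be sketched: isolate a dominant scatterer per range line, estimate the common azimuth phase-error gradient from adjacent-sample phase differences, integrate, and correct. The toy NumPy sketch below assumes range-compressed data in which the image is formed by an FFT along the azimuth axis; windowing and convergence tests are omitted.

```python
import numpy as np

def autofocus(phase_history, n_iter=5):
    """Toy autofocus sketch. phase_history: complex 2-D array (range x azimuth),
    with the image formed by an FFT along the azimuth axis; a single unknown
    azimuth phase error is assumed to multiply every range line."""
    data = phase_history.astype(complex).copy()
    for _ in range(n_iter):
        img = np.fft.fft(data, axis=1)
        # circularly shift the brightest scatterer of each range line to column 0
        centered = np.empty_like(img)
        for r in range(img.shape[0]):
            centered[r] = np.roll(img[r], -int(np.argmax(np.abs(img[r]))))
        g = np.fft.ifft(centered, axis=1)              # back to the aperture domain
        # estimate the phase-error gradient from adjacent-sample phase differences
        grad = np.angle(np.sum(np.conj(g[:, :-1]) * g[:, 1:], axis=0))
        phi = np.concatenate(([0.0], np.cumsum(grad)))
        phi -= np.linspace(0.0, phi[-1], phi.size)     # drop the linear (shift-only) term
        data *= np.exp(-1j * phi)[None, :]             # remove the estimated error
    return np.fft.fft(data, axis=1)                    # focused image estimate
```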

198 citations


Journal ArticleDOI
TL;DR: A prototype model of a video codec was developed that demonstrates the feasibility of both variable bit rate (VBR) coding and user-selectable picture quality.
Abstract: The bandwidth flexibility offered by the asynchronous transfer mode (ATM) technique makes it possible to select picture quality and bandwidth over a wide range in a simple and straightforward manner. A prototype model of a video codec was developed that demonstrates the feasibility of both variable bit rate (VBR) coding and user-selectable picture quality. The VBR coding algorithm is discussed and it is shown how a stabilized quality is achieved and how this quality and associated bandwidth can be selected by the user. How error propagation is limited to reduce the visibility of cell losses is also discussed. Interfaces with the ATM network are analyzed, with emphasis on decoder synchronization and absorption of cell delay jitter. The VBR codec offers very good picture quality for videophony applications at an equivalent load of 5.9 Mb/s. Picture quality remains relatively constant, even for heavy motion.

137 citations


Journal ArticleDOI
TL;DR: A preliminary image quality measure that takes into account two major sensitivities of the human visual system (HVS) is described and allows experimentation with numerous parameters of the HVS model to determine the optimum set for which the highest correlation with subjective evaluations can be achieved.
Abstract: A preliminary image quality measure that takes into account two major sensitivities of the human visual system (HVS) is described. The sensitivities considered are background illumination level and spatial frequency sensitivities. Given a digitized monochrome image, the algorithm produces, among some other figures of merit, a plot of the information content (IC) versus the resolution in units of pixels. The IC is defined here as the sum of the weighted spectral components at an arbitrary specified resolution. The HVS normalization is done by first intensity remapping the image by a monotonically increasing function representing the background illumination level sensitivity, followed by a spectral filtering to compensate for the spatial frequency sensitivity. The developed quality measure is conveniently parameterized and interactive. It allows experimentation with numerous parameters of the HVS model to determine the optimum set for which the highest correlation with subjective evaluations can be achieved. The preliminary results are promising.
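A rough sketch of the two-stage HVS normalization and the IC curve is given below; the lightness remapping and the spatial-frequency weighting are generic placeholders, since the paper's parameterized HVS model is precisely what its interactive experimentation is meant to tune.

```python
import numpy as np

def information_content_curve(img):
    """Sketch of an HVS-normalized information-content (IC) curve for a
    monochrome image scaled to [0, 255]. The cube-root lightness remapping
    and the band-pass spectral weight are stand-ins for the paper's
    background-level and spatial-frequency sensitivity models."""
    remapped = np.cbrt(img / 255.0)                    # monotone intensity remapping
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(remapped)))
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h)) * h        # vertical frequency, cycles/image
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * w
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    r0 = 0.15 * r.max()                                # stand-in CSF peak location
    weight = (r / r0) * np.exp(1.0 - r / r0)           # band-pass weighting, max 1 at r0
    weighted = spectrum * weight
    radii = np.arange(1, min(h, w) // 2 + 1)           # "resolution" in pixel units
    ic = np.array([weighted[r <= k].sum() for k in radii])
    return radii, ic                                   # plot ic versus radii
```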

116 citations


Patent
19 Jul 1989
TL;DR: In this article, an adaptive transform coding algorithm for a still image is proposed, where the image is divided into small blocks of pixels and each block of pixels is transformed using an orthogonal transform such as a discrete cosine transform.
Abstract: In accordance with our adaptive transform coding algorithm for a still image, the image is divided into small blocks of pixels and each block of pixels is transformed using an orthogonal transform such as a discrete cosine transform. The resulting transform coefficients are compressed and coded to form a bit stream for transmission to a remote receiver. The compression parameters for each block of pixels are chosen based on a busyness measure for the block such as the magnitude of the (K+1) th most significant transform coefficient. This enables busy blocks for which the human visual system is not sensitive to degradation to be transmitted at low bit rates while enabling other blocks for which the human visual system is sensitive to degradation to be transmitted at higher bit rates. Thus, the algorithm is able to achieve a tradeoff between image quality and bit rate.
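As a rough illustration of the busyness-driven parameter choice, the sketch below transforms an 8x8 block with a DCT, measures busyness as the magnitude of the (K+1)-th largest coefficient, and picks a coarser quantizer for busy blocks; the thresholds and step sizes are invented for the example, not taken from the patent.

```python
import numpy as np
from scipy.fft import dctn, idctn

def code_block(block, k=6, fine_q=4.0, coarse_q=16.0, busy_thresh=10.0):
    """Busyness-adaptive quantization sketch for one 8x8 block: busy blocks,
    where degradation is less visible, get the coarser quantizer."""
    coeffs = dctn(block.astype(float), norm="ortho")
    busyness = np.sort(np.abs(coeffs).ravel())[::-1][k]   # (K+1)-th most significant coefficient
    q = coarse_q if busyness > busy_thresh else fine_q
    quantized = np.round(coeffs / q).astype(int)           # these integers would be entropy coded
    return quantized, q

def decode_block(quantized, q):
    """Inverse of code_block: dequantize and inverse-transform."""
    return idctn(quantized * q, norm="ortho")
```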

94 citations


Proceedings ArticleDOI
23 May 1989
TL;DR: An adaptive transform coding algorithm using a quadtree-based variable blocksize DCT (discrete cosine transform) is introduced to achieve a better tradeoff between bit rate and image quality.
Abstract: An adaptive transform coding algorithm using a quadtree-based variable blocksize DCT (discrete cosine transform) is introduced to achieve a better tradeoff between bit rate and image quality. The choice of appropriate blocksize is determined by a mean-based decision rule that can discriminate various image contents for better visual quality. Some simulation results are given. It is found that the same or better image quality can be obtained with lower average bit rate.
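A minimal version of such a quadtree blocksize decision might look like the following sketch, where a block is split whenever its quadrant means differ too much from the block mean; the threshold and the exact mean-based rule are stand-ins for the one used in the paper.

```python
import numpy as np

def split_block(img, y, x, size, min_size, thresh):
    """Return leaf blocks (y, x, size); split when quadrant means deviate
    from the block mean by more than thresh (stand-in decision rule)."""
    if size <= min_size:
        return [(y, x, size)]
    block = img[y:y + size, x:x + size]
    half = size // 2
    quads = [block[:half, :half], block[:half, half:],
             block[half:, :half], block[half:, half:]]
    if max(abs(q.mean() - block.mean()) for q in quads) <= thresh:
        return [(y, x, size)]                 # homogeneous: keep one large DCT block
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += split_block(img, y + dy, x + dx, half, min_size, thresh)
    return leaves

def quadtree_partition(img, max_size=32, min_size=4, thresh=6.0):
    """Tile the image with max_size blocks and let each one split recursively;
    every returned leaf would then be coded with a DCT of its own size."""
    return [leaf
            for y in range(0, img.shape[0], max_size)
            for x in range(0, img.shape[1], max_size)
            for leaf in split_block(img, y, x, max_size, min_size, thresh)]
```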

79 citations


Journal ArticleDOI
TL;DR: Improvements in image quality are described, and new variations in the echo‐planar pulse sequence which provide better contrast and allow separate imaging of water and fat distributions are presented.
Abstract: Echo-planar imaging using a magnetic field strength of 0.5 T has resulted in an improvement in image quality compared with recent images published at 0.1 T. The sensitivity of the technique to main magnetic field inhomogeneity and transient eddy currents has necessitated innovations in gradient and radiofrequency coil design. These improvements are described, and new variations in the echo-planar pulse sequence which provide better contrast and allow separate imaging of water and fat distributions are presented. © 1989 Academic Press, Inc.

72 citations


Journal ArticleDOI
TL;DR: The application of on-line portal imaging techniques to the verification of treatment precision is reviewed, and the optimization of image quality is discussed with particular emphasis on photon noise.

Proceedings ArticleDOI
15 Aug 1989
TL;DR: A model for the perception of distortions in pictures is suggested that consists of an adaptive input stage realized as a ROG (Ratio of Gaussian) pyramid, and a further decomposition by orientation selective filters including a saturating nonlinearity acting at each point of the filter outputs.
Abstract: A model for the perception of distortions in pictures is suggested. It consists of two main parts: an adaptive input stage realized as a ROG (Ratio of Gaussian) pyramid also suited for applications in image coding and computer vision, and a further decomposition by orientation selective filters including a saturating nonlinearity acting at each point of the filter outputs. The output values for each point of each filter are regarded as the feature vector of the internal representation of the input picture. The difference between the internal representations of the original and distorted picture is evaluated as the norm of the difference vector. Due to local nonlinearities this operation explains periodic and aperiodic masking effects.

Proceedings ArticleDOI
15 Aug 1989
TL;DR: This work describes a model of the CSF that includes changes as a function of image noise level by using the concepts of internal visual noise, and tests this model in the context of image compression with an observer study.
Abstract: The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display-observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error, a potential image quality degradation. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concepts of internal visual noise, and tests this model in the context of image compression with an observer study.

Journal ArticleDOI
TL;DR: This paper introduces a reduced-difference pyramid data structure in which the number of nodes, corresponding to a set of decorrelated difference values, is exactly equal to thenumber of pixels.
Abstract: Pyramid data structures have found an important role in progressive image transmission. In these data structures, the image is hierarchically represented, with each level corresponding to a reduced-resolution approximation. To achieve progressive image transmission, the pyramid is transmitted starting from the top level. However, in the usual pyramid data structures, extra significant bits may be required to accurately record the node values, the number of data to be transmitted may be expanded, and the node values may be highly correlated. In this paper, we introduce a reduced-difference pyramid data structure in which the number of nodes, corresponding to a set of decorrelated difference values, is exactly equal to the number of pixels. Experimental results demonstrate that the reduced-difference pyramid results in lossless progressive image transmission with some degree of compression. By use of an appropriate interpolation method, reasonable quality approximations are achieved at a bit rate less than 0.1 bit/pixel and excellent quality approximations at a bit rate of about 1.3 bits/pixel.
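The node-count property can be illustrated with a simple mean pyramid: keep the single top-level mean plus, for every parent, three child-minus-parent differences, since the fourth child is implied by the mean. The sketch below is only consistent with that counting argument and is not necessarily the paper's exact decorrelating differences.

```python
import numpy as np

def reduced_difference_pyramid(img):
    """Sketch of a pyramid whose node count equals the pixel count. img must
    be square with a power-of-two side. (A simplified stand-in for the
    paper's reduced-difference construction.)"""
    levels = [img.astype(float)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        levels.append((a[0::2, 0::2] + a[0::2, 1::2] +
                       a[1::2, 0::2] + a[1::2, 1::2]) / 4.0)
    levels.reverse()                                   # levels[0] is the 1x1 top
    nodes = [levels[0].ravel()]                        # one value for the whole image
    for parent, child in zip(levels[:-1], levels[1:]):
        p = np.repeat(np.repeat(parent, 2, axis=0), 2, axis=1)
        d = child - p
        # keep only three of the four differences per parent node;
        # the fourth child is recoverable because the parent is the mean
        nodes.append(np.stack([d[0::2, 0::2], d[0::2, 1::2], d[1::2, 0::2]]).ravel())
    return nodes                                       # sum(n.size for n in nodes) == img.size
```

Transmitting the node arrays in this top-down order gives the receiver a progressively refined approximation at each level.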

Journal ArticleDOI
TL;DR: Simulation testing of a maximum-likelihood-based iterative algorithm adapted from nuclear medicine imaging to noncoherent optical imaging, together with results of a simulation restoring missing-cone information for 3-D imaging, shows the feasibility of using these methods with real systems.
Abstract: A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson, “Bayesian-Based Iterative Method of Image Restoration,” J. Opt. Soc. Am. 62, 55–59 (1972) and L. B. Lucy, “An Iterative Technique for the Rectification of Observed Distributions,” Astron. J. 79, 745–765 (1974). Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. Conclusions are in support of the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here.
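For reference, the Richardson-Lucy / EM iteration the abstract refers to can be written in a few lines; this is the generic textbook form, with the PSF assumed known and normalized, not the authors' extended variant.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """Generic Richardson-Lucy / EM deconvolution: observed is the blurred,
    noisy (non-negative) image and psf is the known point-spread function,
    assumed normalized to unit sum."""
    estimate = np.full(observed.shape, observed.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```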

Journal ArticleDOI
TL;DR: An image synthesizing technique called Synthevision is described; in this approach to image synthesis, a live NTSC camera foreground picture is synchronously keyed into a background picture which is digitally processed from a wide Hi-Vision image.
Abstract: An image synthesizing technique called Synthevision is described. In this approach to image synthesis, a live NTSC camera foreground picture is synchronously keyed into a background picture which is digitally processed from a wide Hi-Vision image. This background-derived picture is controlled by a computer using data from the foreground camera. If the camera image is altered by actions of zooming, panning, and focusing, the background picture is altered accordingly through the use of a newly developed digital image processor. The combined image thus exhibits a far greater realism than conventional chroma-key imaging. Synthevision is currently used at NHK for the evening news to change the background for each segment.

Book ChapterDOI
02 Oct 1989
TL;DR: The acquisition of volume and/or multiecho data, flow measurements, and greater sensitivity offer new possibilities for diagnosis, therapy and operation planning in magnetic resonance imaging.
Abstract: Recent advances in magnetic resonance imaging (MRI) show substantial improvement of image quality and acquisition speed. The acquisition of volume and/or multiecho data, flow measurements, and greater sensitivity offer new possibilities for diagnosis, therapy and operation planning.

Patent
29 Mar 1989
TL;DR: An improved solid-state imaging device having pixel amplifiers is proposed, in which noise is prevented by suppressing the voltage drop of the power supply line and by compensating for fluctuations in the outputs of the pixel amplifiers.
Abstract: The present invention relates to an improved solid-state imaging device having pixel amplifiers. Higher definition of the device requires an increase in the number of pixels to two million or more. When a solid-state imaging device having such a large number of pixels is provided with pixel amplifiers, various problems arise that are associated with the power source and the power supply line, as well as a problem inherent to the pixel-amplifier type of solid-state imaging device. The present invention provides a solid-state imaging device in which noise is prevented and a high-definition picture of good image quality can be obtained, by suppressing the voltage drop of the power supply line and by compensating for fluctuations in the outputs of the pixel amplifiers.

Patent
06 Mar 1989
TL;DR: An image processing apparatus which handles an image as a digital signal is described, comprising: an input device to input image data indicative of a concentration of an image; a binarization circuit to binarize the input data; a positive/negative state detection circuit to detect whether error data generated when the image data is binarized by the binarization circuit is in a positive state or a negative state; and a selector to select whether the error data generated upon binarization is to be corrected or not, on the basis of the positive or negative state of the error data.
Abstract: There is provided an image processing apparatus which handles an image as a digital signal, comprising: an input device to input image data indicative of a concentration of an image; a binarization circuit to binarize the input image data; a positive/negative state detection circuit to detect whether error data generated when the image data is binarized by the binarization circuit is in a positive state or a negative state; and a selector to select whether the error data generated upon binarization is to be corrected or not, on the basis of the positive or negative state of the error data. The input device has a generator to read an original and generate an analog image signal and a converter to convert the analog image signal into the digital image data. The error data is the difference between the input image data and the binary data produced by the binarization circuit. With this apparatus, an image of good picture quality can be obtained by improving the error diffusion method as a halftone processing method. Even when portions of high and low concentration in an original are very close to each other, this prevents the blanking phenomenon, wherein no dot is printed in the low-concentration area near the boundary between those portions, and the consequent loss of content in the reproduced image.
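The kind of sign-conditional error handling described above can be illustrated with a Floyd-Steinberg-style sketch in which the quantization error is diffused to neighbours only when a selector condition on its sign is met; the weights and the specific selector rule here are illustrative, not the patent's.

```python
import numpy as np

def selective_error_diffusion(img, threshold=128, diffuse_negative=True):
    """Binarize a grayscale image by error diffusion, optionally discarding
    errors of one sign (toy stand-in for the patent's selector)."""
    work = img.astype(float).copy()
    h, w = work.shape
    out = np.zeros((h, w), dtype=np.uint8)
    neighbours = ((0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16))
    for y in range(h):
        for x in range(w):
            new = 255.0 if work[y, x] >= threshold else 0.0
            err = work[y, x] - new                    # positive or negative error
            out[y, x] = 1 if new else 0
            if err < 0 and not diffuse_negative:
                continue                              # selector: discard this error
            for dy, dx, wgt in neighbours:
                if 0 <= y + dy < h and 0 <= x + dx < w:
                    work[y + dy, x + dx] += err * wgt
    return out
```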

Journal ArticleDOI
TL;DR: In this article, a real-time arithmetic image processor was used in an electro-optic holography system to combine an image of an object, lit by laser light, with a mutually coherent reference beam.
Abstract: This paper reports the use of a real-time arithmetic image processor in an electro-optic holography system. A speckle interferometer is used to combine an image of an object, lit by laser light, with a mutually coherent reference beam. A CCD TV camera detects the interference pattern, and the phase of the reference beam is advanced by 90° between frames. An image is generated from each set of four sequential TV frames by subtracting alternate frames, squaring, and adding the two results. The result is improved picture quality compared with the use of binary pixels and compared with electronic speckle pattern interferometry. Experimental results are shown.
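The per-pixel arithmetic described (subtract alternate frames, square, add the two results) is simple enough to state directly; the sketch below assumes four frames acquired with 90-degree reference-phase steps.

```python
import numpy as np

def four_frame_image(frames):
    """Electro-optic holography arithmetic from the abstract: from four TV
    frames taken with 90-degree reference-phase steps, subtract alternate
    frames, square, and add. frames: array-like of shape (4, H, W)."""
    f1, f2, f3, f4 = (np.asarray(f, dtype=float) for f in frames)
    return (f1 - f3) ** 2 + (f2 - f4) ** 2
```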

Journal ArticleDOI
TL;DR: An optical model for imaging the retina through cataracts has been developed and a homomorphic Wiener filter can be designed that will optimally restore the cataractous image (in the mean-square-error sense).
Abstract: An optical model for imaging the retina through cataracts has been developed. The images are treated as sample functions of stochastic processes. On the basis of the model a homomorphic Wiener filter can be designed that will optimally restore the cataractous image (in the mean-square-error sense). The design of the filter requires a priori knowledge of the statistics of either the cataract transmittance function or the noncataractous image. The cataract transmittance function, assumed to be low pass in nature, can be estimated from the cataractous image of the retina. The statistics of the noncataractous image can be estimated using an old, precataractous photograph of the same retina, which is frequently available. Various modes of this restoration concept were applied to clinical photographs and found to be effective. The best results were obtained with short-space enhancement using averaged short-space estimates of the spectra of the two images.
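A minimal sketch of the homomorphic Wiener idea, assuming the multiplicative model (retinal image times low-pass cataract transmittance) and power-spectrum estimates obtained as the abstract suggests; the spectrum estimation itself and the short-space variant are omitted.

```python
import numpy as np

def homomorphic_wiener(cataract_img, S_image, S_transmittance, eps=1e-6):
    """Take logs so the multiplicative degradation becomes additive, then
    apply a Wiener gain built from power-spectrum estimates of the log clear
    image (S_image, e.g. from a precataractous photograph) and the log
    transmittance (S_transmittance, e.g. from the cataractous image). Both
    spectra are arrays the same shape as the image, in unshifted FFT order."""
    log_img = np.log(cataract_img.astype(float) + eps)
    F = np.fft.fft2(log_img)
    gain = S_image / (S_image + S_transmittance + eps)   # Wiener gain in the log domain
    restored_log = np.fft.ifft2(F * gain).real
    return np.exp(restored_log)
```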

Proceedings ArticleDOI
01 Nov 1989
TL;DR: In this article, the filtering of noise in image sequences using spatio-temporal motion compensated techniques is considered, and a number of filtering techniques are proposed and compared in this work.
Abstract: In this paper the filtering of noise in image sequences using spatio-temporal motion compensated techniques is considered. Noise in video signals degrades both the image quality and the performance of subsequent image processing algorithms. Although the filtering of noise in single images has been studied extensively, there have been few results in the literature on the filtering of noise in image sequences. A number of filtering techniques are proposed and compared in this work. They are grouped into recursive spatio-temporal and motion compensated filtering techniques. A 3-D point estimator which is an extension of a 2-D estimator due to Kak [5] belongs in the first group, while a motion compensated recursive 3-D estimator and 2-D estimators followed by motion compensated temporal filters belong in the second group. The motion in the sequences is estimated using the pel-recursive Wiener-based algorithm [8] and the block-matching algorithm. The methods proposed are compared experimentally on the basis of the signal-to-noise ratio improvement and the visual quality of the restored image sequences.
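As a generic illustration of the second group of techniques, the sketch below applies a recursive temporal filter along estimated motion trajectories with integer-pixel displacements; it stands in for, but does not reproduce, the pel-recursive and block-matching estimators compared in the paper.

```python
import numpy as np

def mc_temporal_filter(frames, flows, alpha=0.5):
    """Toy motion-compensated recursive temporal filter. frames: (T, H, W)
    array; flows: (T-1, H, W, 2) giving, for every pixel of frame t, the
    (dy, dx) of its matching pixel in frame t-1."""
    frames = np.asarray(frames, dtype=float)
    t_len, h, w = frames.shape
    yy, xx = np.mgrid[0:h, 0:w]
    filtered = [frames[0]]
    for t in range(1, t_len):
        dy = flows[t - 1, ..., 0].astype(int)
        dx = flows[t - 1, ..., 1].astype(int)
        py = np.clip(yy + dy, 0, h - 1)
        px = np.clip(xx + dx, 0, w - 1)
        prediction = filtered[-1][py, px]             # motion-compensated prediction
        filtered.append(alpha * frames[t] + (1.0 - alpha) * prediction)
    return np.stack(filtered)
```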

Patent
01 Dec 1989
TL;DR: In this paper, a compression encoder uses an arbitrary normalization coefficient and a preset table to achieve a compression of image signal with a desired compression rate, and a reproducing apparatus achieves the image signal decoding and reproducing operations by using the normalized data and the Huffman-encoded data.
Abstract: A compression encoder uses an arbitrary normalization coefficient and a preset table to achieve a compression of image signal with a desired compression rate. Since the normalization coefficient and the table data are sent to an image signal decoding and reproducing apparatus together with the compressed image data, the reproducing apparatus can restore the original image from those data items. Furthermore, in the encoder, when an amplitude value of the data exceeds a predetermined value, an overflow sensor senses the condition so as to produce normalized data in addition to the Huffman-encoded data. The reproducing apparatus achieves the image signal decoding and reproducing operations by use of the normalized data and the Huffman-encoded data. With these apparatuses, the picture quality can be prevented from being lowered due to an overflow in the encoding operation.

Journal ArticleDOI
TL;DR: In both phantoms and rabbit brains in vivo motion artifacts were found to be reducible by averaging 8‐16 images and the resulting image contrast no longer represents a “true” diffusion contrast but is affected by additional signal losses due to motion averaging.
Abstract: Severe motion and flow artifacts are a problem in MRI of diffusion in vivo due to the application of strong magnetic field gradients. Here it is shown that image artifacts can be removed by using a modified fast-scan MRI sequence (CE-FAST) in conjunction with averaging of diffusion-weighted images. In phantom studies slow (coherent) flow (<1 mm s-1) in the presence of strong diffusion gradients is shown to cause signal losses in diffusion-weighted images that depend on the relative orientations of the flow direction and the diffusion gradient. On the other hand, pulsatile motions of macroscopic dimensions (e.g. 1 mm, 1 Hz, in-plane) lead to smearing and ghosting of signal intensities along the phase-encoding direction of the images. In both phantoms and rabbit brains in vivo motion artifacts were found to be reducible by averaging 8-16 images. Unfortunately, the resulting image contrast no longer represents a “true” diffusion contrast but is affected by additional signal losses due to motion averaging. All experiments were performed on a 40-cm-bore 2.35-T Bruker Medspec system. © 1989 Academic Press, Inc.

Journal ArticleDOI
TL;DR: The main influences were found to be observer error in marking co-ordinates, scaling of the image presented by the computer's monitor, distortion caused by out-of-plane images and loss of image quality as a result of scattered radiation from the soft tissues.
Abstract: The kinematic behaviour of the vertebral segments under the influence of spinal injury and other mechanical problems is difficult to quantify in patients. This paper describes the use of a calibration model and human subjects to investigate the accuracy of a method for determining lumbar intervertebral rotations using images digitized from an image intensifier. The main influences were found to be observer error in marking co-ordinates, scaling of the image presented by the computer's monitor, distortion caused by out-of-plane images and loss of image quality as a result of scattered radiation from the soft tissues. The technique may be valuable in the light of its efficiency and low X-ray exposure to patients.

Journal Article
TL;DR: The performance of a new scintillation camera, designed for high event rate capability, was evaluated, indicating that this camera design does not compromise image quality at normal clinical count rates and at higher event rates can provide better image quality and increased sensitivity over many Anger cameras currently employed in nuclear medicine.
Abstract: The performance of a new scintillation camera, designed for high event rate capability, was evaluated. The system consisted of a 400 mm field-of-view NaI(Tl) camera with 61 photomultiplier tubes and modified General Electric Starport electronics. A significant feature of the system was circuitry for performing pulse tail extrapolation and separation of individual pulses involved in pulse pile-up events. System deadtime, flood field uniformity, energy resolution, linearity, spatial resolution, and bar phantom image quality were evaluated for count rates up to 200 kcps in a 20% photopeak window. Our results indicate that this camera design does not compromise image quality at normal clinical count rates and at higher event rates can provide better image quality and increased sensitivity over many Anger cameras currently employed in nuclear medicine.

Proceedings ArticleDOI
25 May 1989
TL;DR: In this new approach, the registration variables are de-coupled, resulting in a much less computationally expensive algorithm and the performance of the new technique is demonstrated in the matching of MRI and PET scans, and in an application of pattern recognition in linear accelerator images.
Abstract: Automated image matching has important applications, not only in the fields of machine vision and general pattern recognition, but also in modern diagnostic and therapeutic medical imaging. Image matching, including the recognition of objects within images as well as the combination of images that represent the same object or process using different descriptive parameters, is particularly important when complementary physiological and anatomical images, obtained with different imaging modalities, are to be combined. Correlation analysis offers a powerful technique for the computation of translational, rotational and scaling differences between the image data sets, and for the detection of objects or patterns within an image. Current correlation-based approaches do not efficiently deal with the coupling of the registration variables, and thus yield iterative and computationally-expensive algorithms. A new approach is presented which improves on previous solutions. In this new approach, the registration variables are de-coupled, resulting in a much less computationally expensive algorithm. The performance of the new technique is demonstrated in the matching of MRI and PET scans, and in an application of pattern recognition in linear accelerator images.
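One standard correlation-based building block in this spirit is phase correlation for the translational component (rotation and scale can be decoupled separately, for example via the log-polar magnitude spectrum); the sketch below covers only the translation step and is not the authors' full method.

```python
import numpy as np

def phase_correlation_shift(ref, mov, eps=1e-12):
    """Estimate the integer translation between two same-size images with
    phase correlation. Returns the circular shift (dy, dx) to apply to mov,
    e.g. with np.roll(mov, shift, axis=(0, 1)), to align it with ref."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real   # normalized cross-power
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```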

Patent
22 Nov 1989
TL;DR: In this paper, the authors proposed an image signal encoding apparatus for encoding a signal in which the image of one picture plane, consisting of a plurality of pixel data, is divided into a plurality of blocks each consisting of an arbitrary number of pixels, and a pair of reference value data regarding the maximum and minimum values of the pixel data constructing the block are formed.
Abstract: An image signal encoding apparatus for encoding an image signal in which the image of one picture plane, consisting of a plurality of pixel data, is divided into a plurality of blocks each consisting of a predetermined number of pixel data. For each of the blocks, a pair of reference value data regarding the maximum and minimum values of the levels of the pixel data constructing the block are formed. On the basis of the formed reference value data, encoded data is formed by encoding each of the pixel data of the block. Decoded data is formed by decoding the encoded data formed on the basis of the reference value data. By comparing each of the pixel data of the block with the decoded data, errors are detected. The reference value data are corrected in accordance with the result of the detection. Thus, the encoding errors are reduced and it is possible to perform the encoding with less deterioration in picture quality of the image signal.
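A toy version of coding a block against min/max reference values, with a decode-and-compare step that corrects the references, might look like the following; the level mapping and the correction rule are invented for illustration and are not the patent's.

```python
import numpy as np

def encode_block(block, bits=2):
    """Map each pixel of a block to one of 2**bits uniform levels between the
    block minimum and maximum (the pair of reference values)."""
    lo, hi = float(block.min()), float(block.max())
    levels = (1 << bits) - 1
    step = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((block - lo) / step).astype(np.uint8)
    return lo, hi, codes

def decode_block(lo, hi, codes, bits=2):
    """Reconstruct pixel values from the codes and the reference values."""
    levels = (1 << bits) - 1
    return lo + codes.astype(float) * (hi - lo) / levels

def encode_with_correction(block, bits=2):
    """Decode-and-compare step: measure the reconstruction error and shift
    both reference values by its mean, removing the systematic part of the
    error (a simple stand-in for the patent's error-driven correction)."""
    lo, hi, codes = encode_block(block, bits)
    bias = (decode_block(lo, hi, codes, bits) - block).mean()
    return lo - bias, hi - bias, codes
```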

Journal ArticleDOI
TL;DR: It is shown that real-time reconstruction is feasible using the concepts of parallel processing and that a special multisegment sensor results in a significant improvement in signal-to-noise ratio and image quality and that the reconstructed image benefits from the concurrent activation of multiple receivers per transmitted pulse.
Abstract: An evaluation of the application of a parallel-processing array to the measurement of two-phase flow, such as bubbly oil flow through a pipe, in real-time is described. Pulse-echo ultrasound tomography is used to generate a cross-sectional image of the flow that forms the basis for the deduction of flow parameters, such as the void fraction. The tomographic algorithm used is backprojection adapted for execution on an array of parallel-processing devices. It is shown that real-time reconstruction is feasible using the concepts of parallel processing. Different sensor arrangements were investigated by computer simulation. It is shown that a special multisegment sensor results in a significant improvement in signal-to-noise ratio and image quality and that the reconstructed image benefits from the concurrent activation of multiple receivers per transmitted pulse. The findings may also be useful for nondestructive testing and medical applications.

Journal ArticleDOI
TL;DR: In this article, the so-called square root integral (SQRI) is further evaluated to describe the effect of picture size on subjective image quality, and it is shown that an optimum display size or viewing distance for a given number of displayed pixels can be calculated with its aid.
Abstract: The so-called square root integral (SQRI), which describes the effect of resolution on perceived image quality, is further evaluated to describe the effect of picture size on subjective image quality. This is possible by taking the effect of display size into account in the modulation threshold function of the eye, appearing in the integrand of the SQRI. In this way an excellent correlation is found between subjective image quality and calculated SQRI value for recent measurements by J.H.D.M. Westerink and J.A.J. Roufs (SID Dig., vol.19, p.360-3, May 1988). Data indicate that there is an optimum display size or an optimum viewing distance for a given number of displayed pixels. The optimum conditions can be calculated with the aid of the SQRI.
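For reference, the SQRI is commonly written as SQRI = (1/ln 2) ∫ sqrt(M(u)/M_t(u)) du/u, with M(u) the display modulation transfer function and M_t(u) the eye's modulation threshold, which in this paper also depends on display size. A small numerical sketch, assuming the three quantities are supplied as sampled arrays:

```python
import numpy as np

def sqri(u, display_mtf, threshold_modulation):
    """Numerical sketch of the square root integral,
    SQRI = (1 / ln 2) * integral of sqrt(M(u) / M_t(u)) du / u.
    u: angular frequencies (cycles/degree, u > 0); display_mtf and
    threshold_modulation: equal-length 1-D arrays sampled at u."""
    integrand = np.sqrt(display_mtf / threshold_modulation) / u
    du = np.diff(u)
    return np.sum(0.5 * (integrand[:-1] + integrand[1:]) * du) / np.log(2.0)
```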