
Showing papers on "Image quality" published in 1980


Journal ArticleDOI
TL;DR: Two-dimensional image moments with respect to Zernike polynomials are defined, and it is shown how to construct an arbitrarily large number of independent, algebraic combinations of Zernike moments that are invariant to image translation, orientation, and size as discussed by the authors.
Abstract: Two-dimensional image moments with respect to Zernike polynomials are defined, and it is shown how to construct an arbitrarily large number of independent, algebraic combinations of Zernike moments that are invariant to image translation, orientation, and size. This approach is contrasted with the usual method of moments. The general problem of two-dimensional pattern recognition and three-dimensional object recognition is discussed within this framework. A unique reconstruction of an image in either real space or Fourier space is given in terms of a finite number of moments. Examples of applications of the method are given. A coding scheme for image storage and retrieval is discussed.
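The rotation invariance the authors exploit is easy to demonstrate numerically: the magnitude |A_nm| of a Zernike moment is unchanged when the image is rotated. Below is a minimal numpy sketch of a single moment computed over the unit disk; the function name and the discretization are illustrative assumptions, not the authors' code.

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Zernike moment A_nm of a square grayscale image mapped onto the unit
    disk. Requires |m| <= n and n - |m| even; abs(A_nm) is rotation invariant."""
    N = img.shape[0]
    y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0                       # support restricted to the unit disk
    R = np.zeros_like(rho)                  # radial polynomial R_nm(rho)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1)**s * factorial(n - s) /
             (factorial(s) * factorial((n + abs(m)) // 2 - s)
                           * factorial((n - abs(m)) // 2 - s)))
        R += c * rho**(n - 2*s)
    V_conj = R * np.exp(-1j * m * theta)    # conjugate basis function V*_nm
    return (n + 1) / np.pi * np.sum(img[mask] * V_conj[mask]) * (2.0 / N)**2
```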

2,362 citations


Journal ArticleDOI
TL;DR: A coded aperture camera is similar to a pinhole camera in that no reflecting or refracting optics are used to form an image as mentioned in this paper; however, it differs in that the single opening of the conventional pinhole camera has been replaced with a pattern of holes which may number in the hundreds or thousands.
Abstract: A coded aperture camera is similar to a pinhole camera in that no reflecting or refracting optics are used to form an image. However, it differs in that the single opening of the conventional pinhole camera has been replaced with a pattern of holes which may number in the hundreds or thousands. The resulting image is the superposition of many individual pinhole images and must be unscrambled to be meaningful. The advantage of the coded aperture is that many more photons are collected in a given period of time, and hence the resulting decoded image is of higher quality than if a conventional single pinhole camera were used. An additional advantage lies in the ability to obtain three-dimensional depth information from a single view. This paper presents an overview of past coded aperture imaging efforts, which include the use of Fresnel zone plates, random arrays, and nonredundant pinhole arrays. The theory underlying their use is presented, as well as a review of the digital and optical methods used in decoding. Recent important advances in the field, such as the use of uniformly redundant arrays, are presented. Typical results from some of the above-mentioned systems are shown and discussed.
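A toy simulation makes the superposition-and-decoding idea concrete. The sketch below uses a random pinhole array with balanced-correlation decoding; a uniformly redundant array, as the paper notes, would give an exactly flat sidelobe response. All names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
scene = np.zeros((N, N))
scene[20, 30], scene[40, 12] = 1.0, 0.5               # two toy point sources
aperture = (rng.random((N, N)) < 0.5).astype(float)   # random 50%-open hole pattern

# Each open hole casts a shifted copy of the scene: circular convolution.
F, iF = np.fft.fft2, np.fft.ifft2
recorded = np.real(iF(F(scene) * F(aperture)))

# Balanced-correlation decoding: G = 2A - 1 correlates with A to a near-delta.
G = 2 * aperture - 1
decoded = np.real(iF(F(recorded) * np.conj(F(G)))) / aperture.sum()
```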

82 citations


Book ChapterDOI
01 Jan 1980

63 citations


Journal ArticleDOI
TL;DR: The problems inherent in the design and manufacture of mammographic test objects are discussed and a test object is described which may be used for assessing image quality.
Abstract: The problems inherent in the design and manufacture of mammographic test objects are discussed and a test object is described which may be used for assessing image quality. Some typical results from both good and bad mammographic machine/film combinations are presented.

32 citations


BookDOI
01 Jan 1980
TL;DR: A Parallel Processing System for the Three-Dimensional Display of Serial Tomograms and a proposed method for Faster Synchronization are presented.
Abstract: General.-
Towards an Image Analysis Center for Medicine.- 1. Introduction.- 2. The Interactive Image Analysis System.- 2.1 The Input Units.- 2.2 The Image Display System.- 2.3 The Computer System.- 3. The Computerized Microscope.- 3.1 The Input System.- 3.2 The Computer System.- 3.3 The Microscope Stage Controller.- 4. Discussion.- 5. Acknowledgements.- 6. References.-
Cellular Computers and Biomedical Image Processing.- 1. Introduction.- 2. Cellular Computers.- 2.1 The von Neumann Cellular Automaton.- 2.2 Cellular Automata and Image Processing.- 2.3 Pipeline Image Processor.- 3. Cellular Computers and Image Processing Research.- 3.1 Programming the Cellular Computer.- 3.2 Research Status.- 4. Biomedical Applications of Cellular Digital Image Processing.- 4.1 Coronary Artery Disease.- 4.2 Tissue Growth and Development.- 4.3 Genetic Mutagen Studies.- 5. References.-
Radiology.-
Ultra High-Speed Reconstruction Processors for X-Ray Computed Tomography of the Heart and Circulation.- 1. Introduction.- 2. Computation and Display.- 2.1 The Cone Beam Geometry.- 2.2 Choice of a Suitable Reconstruction Algorithm.- 2.3 Filtration Methods.- 2.4 Reconstruction Processor Design.- 2.5 Back-Projection Implementations.- 3. Flexibility of the High Speed Parallel Processing Reconstruction Architecture.- 4. Acknowledgements.- 5. References.-
Computer Analysis of the Ultrasonic Echocardiogram.- 1. Introduction.- 2. System Configuration.- 3. Processing of Ultrasonic Echocardiogram.- 4. Clinical Results.- 5. Conclusions.- 6. Acknowledgements.- 7. References.-
Toward Computed Detection of Nodules in Chest Radiographs.- 1. Introduction.- 2. Materials and Methods.- 2.1 Preprocessor.- 2.2 Circularity Detector.- 2.3 Boundary Follower.- 2.4 Classifier.- 3. Experimental Results.- 4. Conclusions.- 5. Acknowledgements.- 6. References.-
A Parallel Processing System for the Three-Dimensional Display of Serial Tomograms.- 1. Introduction.- 2. System Outline.- 2.1 Tomograms.- 2.2 Functions.- 3. System Architecture.- 4. Image Input Device.- 5. Image Processing.- 5.1 Reduction of Size.- 5.2 Smoothing.- 5.3 Binarization.- 5.4 Differentiation.- 5.5 Rotation.- 6. Three-Dimensional.- 7. Illustrative Experiment.- 8. Concluding Remarks.- 9. Acknowledgements.- 10. References.-
Dynamic Computed Tomography for the Heart.- 1. Introduction.- 2. Dynamic Scanner.- 2.1 CT Images Using the Dynamic Scanner.- 2.2 ECG Gated Image Using the Dynamic Scanner.- 2.3 ECG Phase Differentiation Method.- 2.4 Comparison of Methods.- 3. Proposed Method for Faster Synchronization.- 3.1 Multilens Method.- 3.2 Rocking Fan Beam Method.- 4. Conclusion.- 5. References.-
Efficient Analysis of Dynamic Images Using Plans.- 1. Introduction.- 2. Hardware System.- 3. Analysis of Heart Wall Motion in Cine-Angiograms.- 3.1 Plan-Guided Analysis of Thickness of the Heart Wall.- 3.2 Input Pictures.- 3.3 Planning for Feature Extraction.- 3.4 Efficient Heuristic Search for Smooth Boundaries.- 3.5 Selection of Frames for Analysis.- 3.6 Analysis of Consecutive Frames.- 3.7 Measurement of Wall Thickness.- 4. Conclusion.- 5. Acknowledgements.- 6. References.-
Real-Time Image Processing in CT-Convolver and Back Projector.- 1. Introduction.- 2. Image Processing in CT.- 3. System Configuration.- 4. High Speed Processor.- 4.1 Convolution.- 4.2 Back Projection.- 4.3 Processing Time.- 5. Display System.- 6. References.-
Histology and Cytology.-
Detection of the Spherical Size Distribution of Hormone Secretory Granules from Electron Micrographs.- 1. Introduction.- 2. Materials and Methods.- 2.1 Manual Method for Size Distribution Analysis.- 2.2 Computer Method for Size Distribution Analysis.- 2.3 Analysis.- 3. Results.- 4. Discussion.- 5. Conclusion.- 6. References.-
The Abbott Laboratories ADC-500™.- 1. Introduction.- 1.1 Rationale.- 1.2 Rate-Limiting Factors.- 2. Sample Preparation.- 2.1 The Spinner.- 2.2 The Stainer.- 3. Real-Time Blood Cell Image Analysis.- 3.1 The Computer-Controlled Microscope.- 3.2 Cell Acquisition.- 3.3 High Resolution Data Analysis.- 3.4 Cell Classification.- 3.5 System Timing.- 3.6 Review.- 4. Results.- 5. Summary and Conclusion.- 6. Future Trends.- 7. References.-
An Approach to Automated Cytotoxicity Testing by Means of Digital Image Processing.- 1. Introduction.- 2. The Principle of the Lymphocytotoxicity Test.- 3. Description of Our Instrument.- 3.1 Image Input Device.- 3.2 Autofocusing Algorithm.- 3.3 Thresholding.- 3.4 Binary Pattern Matching.- 3.5 Cell Counting.- 4. Important Parameters to Determine Positivity.- 4.1 Cell Size.- 4.2 Density.- 4.3 Halo.- 5. Experimental Results.- 6. Summary.- 7. Acknowledgements.- 8. References.-
The diff3™ Analyzer: A Parallel/Serial Golay Image Processor.- 1. Introduction.- 2. Functional Organization.- 3. Automated Microscope.- 3.1 Optics.- 3.2 Optical Bench.- 4. Golay Image Processor.- 4.1 Golay Logic Processor (GLOPR).- 4.2 Application of GLOPR.- 5. Summary.- 6. References.-
Computer-Aided Tissue Stereology.- 1. Introduction.- 2. Analysis of Muscle Biopsies.- 3. Analysis of the Placenta.- 4. Analysis of Adipose Tissue.- 5. Computer-Aided Data Capture.- 6. Choice of Method.- 7. Acknowledgements.- 8. References.-
Interactive System for Medical Image Processing.- 1. Introduction.- 2. Using the SUPRPIC Interactive Image Processing System.- 2.1 Cellular Logic for Image Processing.- 2.2 The Command Structure of SUPRPIC.- 3. Examples of Using SUPRPIC.- 3.1 Digitizing the Image.- 3.2 Determination of Tissue Image Architecture.- 4. Summary and Conclusions.- 5. References.-
Real-Time Image Processing in Automated Cytology.- 1. Introduction.- 2. System Design and Specifications.- 2.1 System Design.- 2.2 System Specifications.- 3. Image Input.- 3.1 Image Input System Design.- 3.2 Image Sensor Module.- 3.3 Screening Module.- 3.4 Automated Focusing Module.- 4. Feature Extraction Method.- 4.1 Feature Extraction Hardware.- 4.2 Feature Extraction Procedure.- 5. Processing Sequence.- 6. Conclusion.- 7. References.-
The Development of a New Model Cyto-Prescreener for Cervical Cancer.- 1. Introduction.- 2. Fundamental Ideas.- 2.1 Improvement of Diagnostic Performance.- 2.2 Improvement of Processing Speed.- 2.3 Automatic Slide Preparation.- 3. Improvement of Diagnostic Performance.- 3.1 Improvement of Cell Image Quality.- 3.2 Addition of Morphological Feature Parameters.- 3.3 Improvement of Preprocessing Program for Feature Extraction.- 3.4 Increase in Number of Cells Analyzed.- 4. Hardware Implementation.- 4.1 High Resolution TV Camera.- 4.2 Automatic Focusing Mechanism.- 4.3 High Speed Image Processor.- 4.4 Flexible Controllers.- 4.5 Automatic Smear Preparation.- 5. Conclusion.- 6. Acknowledgement.- 7. References.-
Author Index.

20 citations


Proceedings ArticleDOI
28 May 1980
TL;DR: In this paper, an introduction to the computer image processing and recognition techniques applied for accurately locating an object is presented and some experimental examples are presented which illustrate some practical solutions to the problem.
Abstract: An introduction to the computer image processing and recognition techniques applied for accurately locating an object is presented in this paper. The accurate measurement of three-dimensional position requires a camera calibration process as well as the determination of corresponding image points in two images. The accuracy of the three-dimensional measurement depends upon the accuracy of the image matching solution. Since there are a variety of image matching techniques, the pattern recognition guidelines are reviewed, which indicate that the optimum features are nonlinear, a posteriori probabilities of the measurements. These optimum features also maximize the trace of the between-class scatter matrix normalized by the mixture scatter matrix. However, the theoretical guidelines do not indicate simple measurement methods for the optimum features. Therefore, some experimental examples are presented which illustrate some practical solutions to the problem.
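The image-matching step on which the three-dimensional accuracy hinges can be as simple as normalized cross-correlation between a patch in one view and a search strip in the other. The sketch below is a common practical stand-in, not the optimum-feature method the guidelines describe; all names are hypothetical.

```python
import numpy as np

def ncc_match(patch, strip):
    """Slide `patch` along `strip` (same height); return the horizontal offset
    with the highest normalized cross-correlation score."""
    h, w = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-12)
    scores = []
    for x in range(strip.shape[1] - w + 1):
        q = strip[:, x:x + w]
        q = (q - q.mean()) / (q.std() + 1e-12)
        scores.append(np.mean(p * q))
    return int(np.argmax(scores))
```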

20 citations


Journal ArticleDOI
TL;DR: The relationship between perceived image quality and measurable performance parameters of an intensified fluoroscopic image, viewed via a TV monitor or recorded on 105mm film, was investigated and general agreement was found between quantitative measurements and radiologists' perceptions of image quality.
Abstract: The relationship between perceived image quality and measurable performance parameters of an intensified fluoroscopic image, viewed via a TV monitor or recorded on 105mm film, was investigated. Four specially manufactured image tubes, differing significantly in x-ray absorption efficiency, spatial resolution, and/or contrast resolution, were studied. Quantitative measurements of tube performance included the conversion factor, the quantum detection efficiency, the limiting resolution, the contrast ratio, and the contrast-detail characteristics. An assessment of the quality of clinical images was made by two radiologist-observers, working independently and without knowledge of any quantitative results. The observers were asked to rate the noise, lag, resolution, and contrast of the images during a variety of fluoroscopic procedures on each tube. While general agreement was found between the quantitative measurements and the radiologists' perceptions of image quality, the noise and contrast performance of a...

12 citations


Proceedings ArticleDOI
Chun Moo Lo1
23 Dec 1980
TL;DR: In this paper, the authors present the investigation and the implementation of six candidates for FLIR image enhancement and show some experimental results, including variable threshold zonal filtering, statistical differencing operator, unsharp masking, prototype automatic target screener technique, constant variance, and histogram equalization.
Abstract: The goal of FLIR image enhancement is to obtain a good quality display by compressing the global scene dynamic range while enhancing the local area contrast. This paper presents the investigation and the implementation of six candidates for FLIR image enhancement and shows some experimental results. The six enhancement candidates are: (1) variable threshold zonal filtering, (2) statistical differencing operator, (3) unsharp masking, (4) prototype automatic target screener technique, (5) constant variance, and (6) histogram equalization. All the enhancement techniques make use of the local nonstationary mean, the local variance, or both, to achieve edge enhancement or contrast stretching in local regions. The local nonstationary mean and the local variance, in each case, are computed by a two-dimensional rolling-window averaging processor. Finally, an experiment based on subjective criteria to judge the enhanced image quality was conducted. The results showed that the variable threshold zonal filter, prototype automatic target screener, and unsharp masking methods were the superior techniques.
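As an illustration of the shared machinery, here is a hedged numpy sketch of one of the six candidates, the constant-variance method: the local nonstationary mean and standard deviation come from a rolling window, and each pixel's deviation from the local mean is rescaled toward a target standard deviation. The window size and target value are arbitrary choices, not those of the paper.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def constant_variance(img, k=15, target_sigma=40.0, eps=1e-6):
    """Rescale local contrast so the local standard deviation approaches
    target_sigma; k is the (odd) rolling-window size."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    win = sliding_window_view(padded, (k, k))
    mu = win.mean(axis=(-1, -2))      # local nonstationary mean
    sigma = win.std(axis=(-1, -2))    # local standard deviation
    return mu + (img - mu) * target_sigma / (sigma + eps)
```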

11 citations


01 Sep 1980
TL;DR: In this article, the authors used a scanning micro-densitometer to digitize ten medium scale (1:2700 to 1:4400) aerial photographs, typical of the imagery viewed by Air Force photointerpreters, and then multiplied by two Gaussian filter functions to yield two blurred and one ground truth level of each image and transformed to eight bits of intensity for output.
Abstract: Ten medium scale (1:2700 to 1:4400) aerial photographs, typical of the imagery viewed by Air Force photointerpreters, were digitized to 4096 x 4096 files (20-micron aperture by 11 bits intensity) on a scanning microdensitometer. The image files were then multiplied by two Gaussian filter functions (Fourier domain) to yield two blurred and one ground-truth level of each image, and transformed to eight bits of intensity for output. One of four weightings of a 4096 x 4096 Gaussian noise file was added to each image file, yielding 3 Blur x 5 Noise x 10 Image combinations (150 images total). Positive transparencies then served as the database for an information extraction task. Fifteen photointerpreters (PIs) from the 548th Reconnaissance Technical Group, Hickam AFB, Hawaii, served as subjects. Blur was a between-subjects variable with five PIs at each of three levels. Noise was a within-subjects variable (five levels). The Noise main effect was significant (p < .01). The Blur main effect and the Blur x Noise interaction were not found to be statistically significant, although the Blur main effect was of the expected form. The data were correlated with image quality scaling values from a separate study using the same images and PIs. A significant correlation was found (r = .898).
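The degradation pipeline is straightforward to reproduce: a Gaussian filter applied in the Fourier domain, additive Gaussian noise, and quantization to eight bits. The sketch below is an assumed reconstruction of that pipeline; the study's actual filter widths and noise weightings are not given here.

```python
import numpy as np

def degrade(img, blur_sigma, noise_sigma, seed=0):
    """Fourier-domain Gaussian blur plus additive Gaussian noise, then 8-bit
    output. blur_sigma is in pixels, noise_sigma in gray levels (illustrative)."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    H = np.exp(-2.0 * (np.pi * blur_sigma)**2 * (fx**2 + fy**2))  # Gaussian MTF
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
    noisy = blurred + rng.normal(0.0, noise_sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```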

10 citations


Proceedings ArticleDOI
01 Nov 1980
TL;DR: In this article, a number of variations on the basic block linking approach are investigated, and some tentative conclusions are drawn regarding preferred methods of initializing the process and of defining the links, yielding improvements over the originally proposed method.
Abstract: When an image is smoothed using small blocks or neighborhoods, the results may be somewhat unreliable due to the effects of noise on small samples. When larger blocks are used, the samples become more reliable, but they are more likely to be mixed, since a large block will often not be contained in a single region of the image. A compromise approach is to use several block sizes, representing versions of the image at several resolutions, and to carry out the smoothing by means of a cooperative process based on links between blocks of adjacent sizes. These links define "block trees" which segment the image into regions, not necessarily connected, over which smoothing takes place. In this paper, a number of variations on the basic block linking approach are investigated, and some tentative conclusions are drawn regarding preferred methods of initializing the process and of defining the links, yielding improvements over the originally proposed method.
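A drastically simplified two-level sketch of the linking step follows. Each child block links to the most similar of up to four nearby parent blocks, and each parent is then re-estimated from its linked children; the candidate-parent indexing here only approximates the overlapped pyramid of the actual method, so treat it as a cartoon of the cooperative process, not the published algorithm.

```python
import numpy as np

def relink(children, parents, iters=5):
    """children: grid of fine-block means; parents: roughly half-size grid of
    coarse-block means. Iteratively relink children and re-estimate parents."""
    H, W = children.shape
    Ph, Pw = parents.shape
    for _ in range(iters):
        sums = np.zeros_like(parents)
        cnts = np.zeros_like(parents)
        for i in range(H):
            for j in range(W):
                cands = [(pi, pj)
                         for pi in {max(0, (i - 1) // 2), min(Ph - 1, i // 2)}
                         for pj in {max(0, (j - 1) // 2), min(Pw - 1, j // 2)}]
                best = min(cands, key=lambda p: abs(children[i, j] - parents[p]))
                sums[best] += children[i, j]
                cnts[best] += 1
        parents = np.where(cnts > 0, sums / np.maximum(cnts, 1), parents)
    return parents
```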

9 citations


Proceedings ArticleDOI
16 Sep 1980
TL;DR: This work has developed another approach, based on Buchdahl's Optical Aberration Coefficients (OAC's), which replaces the summations in an ordinary spot diagram calculation with integrals over the aperture and thereby obtains the rms spot size.
Abstract: Many of the lens design computer programs used today optimize a design by minimizing a list of image errors which have been selected because of their relation to the quality of the image. In many cases, it is preferable to minimize quantities that are direct measures of the image quality of the design. An example of one such quantity is the rms spot size, which is commonly computed by tracing grids of rays through the lens. The method, while very convenient, can be an expensive way to optimize a lens because of the number of rays that must be traced to get reliable spot diagram statistics. We have developed another approach, based on Buchdahl's Optical Aberration Coefficients (OAC's). The OAC's form polynomial models of the imaging behavior of a lens and can therefore be used to compute its performance. We replace the summations in an ordinary spot diagram calculation with integrals over the aperture and thereby obtain the rms spot size. The answers are free from aperture sampling effects, and the optimization is 6:1 faster than a ray-trace-based method using eight rays at three positions in the field. We review the mathematical methods employed and show examples of the accuracy and cost effectiveness of the OAC approach in lens design.
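For contrast, the quantity being optimized is simple to state. The conventional ray-grid estimate that the OAC integrals replace is just the second moment of the ray intersection points about their centroid (ray data assumed given; names are illustrative):

```python
import numpy as np

def rms_spot_radius(x, y):
    """RMS spot radius of ray/image-plane intersection points (x, y),
    measured about the spot centroid."""
    dx, dy = x - x.mean(), y - y.mean()
    return np.sqrt(np.mean(dx**2 + dy**2))
```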

Proceedings ArticleDOI
22 Aug 1980
TL;DR: Two phase retrieval methods are tested with two-dimensional simulated noisy speckle image data: a simple phase unwrapping algorithm and a Knox-Thompson algorithm used in combination with the blind deconvolution and the Welter-Worden autocorrelation seeing correction methods.
Abstract: Two phase retrieval methods are tested with two-dimensional simulated noisy speckle image data: a simple phase unwrapping algorithm and a Knox-Thompson algorithm. These are used in combination with the blind deconvolution and the Welter-Worden autocorrelation seeing correction methods. The final images are retouched with the iterative Fienup method.

Journal ArticleDOI
TL;DR: A new preprocessing technique which removes unwanted notches and random noise generated by electronic scanners is presented, using a 3 element by 3 element mask, which results in a significant improvement in both subjective image quality and coding efficiency.
Abstract: A new preprocessing technique which removes unwanted notches and random noise generated by electronic scanners is presented. Using a 3 element by 3 element mask, the central picture element is altered from black to white, or vice versa, if the surrounding elements correspond to a predetermined pattern. A significant improvement in both subjective image quality and coding efficiency is observed.
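The isolated-pixel case of the rule is easy to sketch; the paper's full set of predetermined 3 x 3 patterns is not reproduced here, so the sketch below is one representative instance with hypothetical names.

```python
import numpy as np

def remove_notches(bitmap):
    """Flip a pixel when all eight neighbours disagree with it: a lone black
    dot becomes white, a lone white hole becomes black (binary 0/1 image)."""
    out = bitmap.copy()
    H, W = bitmap.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            neighbours = bitmap[i-1:i+2, j-1:j+2].sum() - bitmap[i, j]
            if bitmap[i, j] == 1 and neighbours == 0:
                out[i, j] = 0       # isolated black pixel: scanner noise
            elif bitmap[i, j] == 0 and neighbours == 8:
                out[i, j] = 1       # isolated white pixel: a notch
    return out
```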

Patent
28 Oct 1980
TL;DR: In this article, a test film is provided that is a translucent film (32) dimensioned and adapted for loading into a camera as photographic film, which allows an image to be viewed thereon when the shutter of the camera (12) is open, which image is identical to the image that would be captured by photographic film.
Abstract: A test film (10) is provided that is a translucent film (32) dimensioned and adapted for loading into a camera as photographic film. The translucent film material allows an image to be viewed thereon when the shutter of the camera (12) is open, which image is identical to the image that would be captured by photographic film. In accordance with the method of the present invention, the focus, picture image quality, shutter speed and flash synchronization of the camera (12) can be tested utilizing the test film (10) of the invention.

Patent
14 Mar 1980
TL;DR: In this paper, the authors proposed a method to prevent the deterioration in image quality in configuration of moving image without lowering the noise lowering effect, where the analogue image signal from the input terminal is converted into a digital image signal through A/D converter 1, written into the frame memory 3 through the adder 2, read out by delaying one frame period and obtained by the D/A converter 4.
Abstract: PURPOSE: To prevent degradation of image quality when forming a moving image, without reducing the noise-reduction effect. CONSTITUTION: The analogue image signal from the input terminal is converted into a digital image signal by the A/D converter 1, written into the frame memory 3 through the adder 2, read out with a delay of one frame period, and converted back to an analogue image output by the D/A converter 4. The one-frame-delayed image signal from the frame memory 3 is returned to the adder 2 through the moving-vector correction circuit 9 and also fed to the subtractor 5; the difference between the two signals is detected by the level detection circuit 6 and supplied to the control circuit 7. In addition, the one-frame-delayed image signal from the frame memory 3 and the image signal on the input side are supplied to the moving-vector detection circuit 8, which detects the motion vector of the image and supplies it to the moving-vector correction circuit 9 to carry out the motion-vector correction.
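Stripped of the motion-vector machinery, the adder/frame-memory loop is a first-order recursive temporal filter. The sketch below (blend coefficient k assumed; in the patent it would in effect be adjusted where the level detector signals motion, after motion-vector correction) shows the core noise-averaging behaviour.

```python
import numpy as np

def temporal_denoise(frames, k=0.25):
    """Blend each incoming frame with the one-frame-delayed output:
    y[n] = k * x[n] + (1 - k) * y[n-1]. Noise variance drops as k shrinks."""
    out, prev = [], None
    for f in frames:
        prev = f.astype(float) if prev is None else k * f + (1 - k) * prev
        out.append(prev)
    return out
```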

Patent
25 Dec 1980
TL;DR: The surface of a photoreceptor that repeatedly performs exposure, development and transfer is renewed when specified conditions hold; as mentioned in this paper, a recording image-quality control method makes the image quality just after renewal of the photoreceptor surface approximate that just before renewal, by controlling at least one of the physical quantities, such as charger voltage, lamp intensity and developing bias voltage, that affect the recorded image.
Abstract: PURPOSE: To record with approximately constant image quality, even when the photoreceptor surface is renewed, by controlling at least one of the physical quantities that affect the quality of the recorded images. CONSTITUTION: The surface of a photoreceptor 11 which performs exposure, development and transfer repeatedly is arranged so as to be renewed to another photoreceptor surface when specified conditions hold. This provides a recording image-quality control method which makes the image quality right after renewal of the photoreceptor 11 surface approximate that just before the renewal, by controlling at least one of the physical quantities, such as the voltage of the charging charger 12, the light-emission intensity of the lighting lamp 6 and the developing bias voltage, that affect the quality of the recorded images.

Book ChapterDOI
TL;DR: In this article, the authors used a Lockheed-designed digital integrator, a device that digitally sums television (TV) frames and thereby effectively increases the exposure time, to allow high-energy real-time radiography to challenge the image quality of film.
Abstract: The use of a Lockheed-designed digital integrator has allowed high-energy real-time radiography to challenge the image quality of film. The integrator is a device that digitally sums television (TV) frames or effectively increases the exposure time. The integrator is being used as part of a high-energy radiography system that typically uses a 15-MeVp radiation source to penetrate the absorber, for example, 1.5 m (5 ft) of propellant. The real-time X-ray camera converts the X-ray image into an electronic signal in standard TV format. The TV frames are fed to the integrator. The high image quality of the Lockheed integrator results from the large memory used, precise timing, equal weighting of the frames, and the capability for repetitive integration. The nominal 512-by-512 pixel array and the precise timing give high resolution. Eight-bit digitization into and out of the integrator reduces digitization noise well below the eye's threshold for detection. The nominal 9-min-maximum integration time permits radiography through extremely thick absorbers or with low-flux sources. The presence of a refresh memory permits the inspection of a full-quality image while the remainder of the memory is being used to integrate a new frame. It also makes possible repetitive integration, which makes integration of radiographs of objects in motion feasible. Finally, image subtraction permits the removal of system anomalies or image gradients or both.
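The integrator's core operation, digitally summing TV frames, takes only a few lines; for photon-limited frames the signal-to-noise ratio grows roughly as the square root of the number of frames summed, which is what lets long integrations approach film quality. Names and the 8-bit scaling are illustrative.

```python
import numpy as np

def integrate_frames(frames):
    """Sum TV frames in a wide accumulator, then rescale to 8 bits; summing N
    frames improves SNR by about sqrt(N) for photon-limited noise."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for f in frames:
        acc += f
    return np.clip(acc / len(frames), 0, 255).astype(np.uint8)
```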

Journal ArticleDOI
TL;DR: In this article, the signal-to-noise ratios computed from these NEQ values characterize the performance of an ideal observer confronted with the image and a detection task, i.e., image processing and display for human acquisition.
Abstract: The square root of the density of exposure quanta at the image plane is a popular measure of image quality for photon images. This is in fact only an upper limit which is rarely realized in the image. It is the density of noise equivalent quanta (NEQ) deduced from image measurements which is relevant to real imagery. In general NEQ is a non-trivial function of spatial frequency. Examples are taken from screen/film mammography, electrostatic mammography, and computed tomography. The signal-to-noise ratios computed from these NEQ values characterize the performance of an ideal observer confronted with the image and a detection task. This sets goals for real observer performance, i.e., image processing and display for human acquisition.
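In the standard formulation (notation assumed here, not quoted from the paper), NEQ is computed from measured transfer and noise characteristics and bounds the ideal observer's detection performance:

```latex
\mathrm{NEQ}(f) \;=\; \frac{S^{2}\,\mathrm{MTF}^{2}(f)}{\mathrm{NPS}(f)} \;\le\; q,
\qquad
\mathrm{SNR}^{2}_{\text{ideal}} \;=\; \int \lvert \Delta S(f) \rvert^{2}\,\mathrm{NEQ}(f)\,df,
```

where S is the large-area signal response, NPS the measured noise power spectrum, q the density of exposure quanta (the upper limit the abstract refers to), and ΔS the frequency content of the detection task's signal.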

Journal ArticleDOI
TL;DR: An all-spherical-mirror system for applications in coherent image processing is described and analyzed and the Fourier transform properties of this one-to-one system are acceptable for many applications.
Abstract: An all-spherical-mirror system for applications in coherent image processing is described and analyzed. This one-to-one system is panchromatic and can be made to have minimal cosmetic defects. Such a system offers advantages such as multiple wavelength operations and the introduction of minimal scattering noise into the final image. A sample design that is diffraction-limited (for f/8) over the entire area of a standard 35-mm slide is given. The Fourier transform properties of this system are acceptable for many applications.


Journal ArticleDOI
William J. Dallas1
TL;DR: A continuous, as opposed to sampled, mathematical description applicable to 3-D image reconstruction from 2-D projections is applied to three examples: computed tomography, tomosynthesis, and coded-source tomosynthesis.
Abstract: A continuous, as opposed to sampled, mathematical description applicable to 3-D image reconstruction from 2-D projections is applied to three examples. These examples are computed tomography, tomosynthesis, and coded-source tomosynthesis. The effects of various projection filtering procedures on reconstructed image quality are discussed for each example.
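For the first of the three examples, here is a minimal discrete rendering of the continuous description: parallel-beam filtered back-projection with a ramp projection filter. Nearest-neighbour interpolation and all names are simplifying assumptions, not the paper's formulation.

```python
import numpy as np

def fbp(sinogram, angles):
    """sinogram: (n_angles, n_det) array of parallel projections; angles in
    radians. Ramp-filter each projection, then back-project onto the grid."""
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))                    # |f| projection filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    xs = np.arange(n_det) - n_det / 2
    X, Y = np.meshgrid(xs, -xs)
    recon = np.zeros((n_det, n_det))
    for proj, th in zip(filtered, angles):
        t = X * np.cos(th) + Y * np.sin(th) + n_det / 2     # detector coordinate
        recon += proj[np.clip(t.astype(int), 0, n_det - 1)]
    return recon * np.pi / len(angles)
```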

Proceedings ArticleDOI
11 Nov 1980
TL;DR: The emphasis in this paper is on utilization of digitally-derived power spectral estimates of image quality computed before hard copy imagery is produced.
Abstract: Frequency domain measures of image quality have been used successfully for some years to predict human assessment of image quality. These measures were derived from the optical power spectrum. The emphasis in this paper is on utilization of digitally-derived power spectral estimates of image quality computed before hard copy imagery is produced.


Journal ArticleDOI
TL;DR: In this paper, a quantitative analysis is made based on the calculation of entropy and the ideal data compression rate when this signal processing system is used as preprocessing, and a considerable improvement of the data compression rate is shown.
Abstract: Recently, much research has been done on data compression for digital facsimile transmission, and several papers have reported coding systems in which certain signal processing is applied to the local parts of the original image before encoding. This facilitates the encoding and improves the data compression rate. This paper proposes a new system which utilizes a signal processing scheme based on the majority principle. This system can be implemented relatively easily in hardware. The image quality reproduced by this system is quite acceptable and can be improved by high-density sampling. In this paper, a quantitative analysis is made based on the calculation of entropy and the ideal data compression rate when this signal processing system is used as preprocessing. A considerable improvement of the data compression rate is shown. In addition, a comparison between the proposed system and existing conventional systems is made.
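The majority-principle preprocessing can be sketched as a plain 3 x 3 majority vote on the binary image before encoding; the paper's high-density-sampling refinement is omitted and the function name is hypothetical.

```python
import numpy as np

def majority_smooth(bitmap):
    """Set each interior pixel to the majority value of its 3 x 3
    neighbourhood (9 pixels, so there are no ties); binary 0/1 input."""
    out = bitmap.copy()
    H, W = bitmap.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            out[i, j] = 1 if bitmap[i-1:i+2, j-1:j+2].sum() >= 5 else 0
    return out
```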

Proceedings ArticleDOI
17 Dec 1980
TL;DR: A digital test target system is described which may be used to evaluate digital imaging systems and line-scan and digital displays in a fashion similar to the use of resolution targets for the evaluation of optical analogue image systems.
Abstract: A digital test target system is described which may be used to evaluate digital imaging systems and line-scan and digital displays (soft or hard copy) in a fashion similar to the use of resolution targets for the evaluation of optical analogue image systems. The digital test target system consists of a basic pattern of checkerboards of different sizes, which is repeated with different commanded "brightness" and "contrast" levels; a procedure for reading the target; and a method of adjusting the reading scores to obtain an index of image quality. The target system presently consists of an eight-target set, arranged for evaluation of a 512-pixel-square matrix display having eight-bit gray-level capability. The system is applicable to other display matrix sizes and gray-level capabilities. An index is derived from the readings so that displays having similar image quality index values will have the same average image quality performance regardless of differences in matrix size and nominal gray-level capability.
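Generating the basic pattern is straightforward: a checkerboard cell parameterized by check size and by commanded brightness and contrast. The sketch below uses illustrative sizes and level spacings, not the target system's actual values.

```python
import numpy as np

def checkerboard(size, check, lo, hi):
    """A size x size checkerboard with check-pixel squares alternating between
    gray levels lo and hi (their midpoint ~ brightness, their span ~ contrast)."""
    i, j = np.indices((size, size)) // check
    return np.where((i + j) % 2 == 0, lo, hi).astype(np.uint8)

# A family of cells for an 8-bit display, varying check size and contrast:
targets = [checkerboard(64, c, 128 - d, 128 + d)
           for c in (1, 2, 4, 8) for d in (4, 16, 64)]
```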

Proceedings ArticleDOI
18 Aug 1980
TL;DR: A total low dose fluoroscopy system has been developed, using a Siemens Videomed H 1023-line 25 MHz television system and a modified Matrix Videoimager multi-image camera for recording spot films, with clinically acceptable image quality.
Abstract: A total low dose fluoroscopy system has been developed, using a Siemens Videomed H 1023-line 25 MHz television system and a modified Matrix Videoimager multi-image camera for recording spot films. The radiation dose during fluoroscopy has been reduced by using pulses of radiation at fluoroscopic intensities, approximately 17 ms wide, storing the image on a V.A.S. video disc recorder during the pulse, and playing the stored image back through the fluoroscopic monitor. The combination of these methods has resulted in dose and cost reductions of the order of 90%, with clinically acceptable image quality. With minor modification the system could be made compatible with any commercially available high line rate, high resolution television system.

DOI
01 Sep 1980
TL;DR: The application of standard psychophysical techniques to the measurement of the quality of noisy images is demonstrated using examples taken from clinical radionuclide imaging; smaller changes in intensity can be detected on digital images than on analogue ones.
Abstract: The application of standard psychophysical techniques to the measurement of the quality of noisy images is shown using examples taken from clinical radionuclide imaging. In clinical radionuclide imaging the distribution of a radiopharmaceutical injected into a patient is mapped in vivo by using a scintillation detector to record the emitted gamma rays. Areas of above or below average radiopharmaceutical concentration may indicate clinical abnormalities, such as tumours. As the images have only a few hundred photons/cm², random (Poisson) fluctuations in intensity are large and may mask changes in radiopharmaceutical concentration. The quality of these images is assessed by using either the method of constant stimuli or signal-detection theory to measure an observer's ability to detect small changes in image intensity. The method of constant stimuli is used to compare analogue with digital images. It is shown that smaller changes in intensity can be detected on the digital images than on the analogue ones. Signal-detection theory is used to measure the effect of filtering on image quality. For a given rate of false positive responses, the filtered images produce a higher true positive response rate than the unfiltered ones.
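The signal-detection-theory comparison reduces to estimating a detectability index from the observers' true- and false-positive rates. A minimal sketch using the equal-variance Gaussian d' follows; the rates shown are made-up numbers, not results from the paper.

```python
from statistics import NormalDist

def d_prime(tp_rate, fp_rate):
    """Equal-variance Gaussian detectability index:
    d' = z(TP rate) - z(FP rate)."""
    z = NormalDist().inv_cdf
    return z(tp_rate) - z(fp_rate)

# Filtering that raises the TP rate at a fixed FP rate raises d':
print(d_prime(0.80, 0.10), d_prime(0.90, 0.10))
```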


Journal ArticleDOI
TL;DR: A multi-image camera has been modified to record spot films from the video disc image, resulting in a marked reduction in patient radiation and film cost, and initial findings incorporating pulsed fluoroscopy using a video disc are discussed.
Abstract: The image quality of photofluorographic spot films (70, 100 or 105 mm) has been gradually improving as high resolution image intensification has evolved. With newly available 1023-line fluoroscopic monitors providing 2.8 line pairs/mm resolution, it is now possible to photograph diagnostic images directly off the monitor. Such images provide detail similar to that currently available on 100 mm spot films. A multi-image camera has been modified to record spot films from the video disc image, resulting in a marked reduction in patient radiation and film cost. Initial findings incorporating pulsed fluoroscopy using a video disc are discussed.

Book ChapterDOI
01 Jan 1980