
Showing papers on "Image processing published in 1973"


Journal ArticleDOI
TL;DR: In this paper, mathematical expressions for the Wiener spectrum of the image of a point source were obtained for angular frequencies much less than, and much greater than, the conventional seeing limit.
Abstract: A new technique (speckle interferometry) has been developed by Gezari, Labeyrie, and Stachnik, which allows the measurement of stellar diameters from a series of photographs obtained from large-aperture ground-based telescopes. The series of photographs is processed to obtain the Wiener spectrum of the photographic image, i.e., the ensemble-averaged modulus-squared Fourier transform obtained from the series of images. Gezari, Labeyrie, and Stachnik have measured stellar diameters as small as 0″.05, about 20 times better than is usually possible. In this paper, mathematical expressions are obtained for the Wiener spectrum of the image of a point source. As is well known, the Wiener spectrum of the image of an extended, incoherently radiating object is expressible as a product of this point-source spectrum and the object spectrum. Calculations are performed using the Rytov approximation and assuming that the underlying atmospheric turbulence is describable by a Kolmogorov spectrum. Asymptotic closed-form expressions are obtained for angular frequencies much less than, and much greater than, the conventional seeing limit. In the latter case, the Wiener spectrum is found to be proportional to the optical transfer function.
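The estimator described above (the ensemble-averaged modulus-squared Fourier transform of a series of short-exposure images) can be sketched in a few lines of numpy; the function name and the random toy frames below are illustrative, not from the paper:

```python
import numpy as np

def wiener_spectrum(frames):
    """Ensemble-averaged modulus-squared Fourier transform of a stack of
    short-exposure images -- the speckle-interferometry estimator."""
    spectra = [np.abs(np.fft.fft2(f)) ** 2 for f in frames]
    return np.mean(spectra, axis=0)

# Toy demonstration on random stand-in "speckle" frames.
rng = np.random.default_rng(0)
frames = rng.random((8, 64, 64))
W = wiener_spectrum(frames)
```

For a real measurement, dividing the object's Wiener spectrum by the point-source spectrum (measured on a nearby unresolved star) recovers the squared modulus of the object spectrum, as the abstract notes.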

234 citations



Journal ArticleDOI
TL;DR: The various classes of functions which can be implemented in the cellular array are discussed and sample programs explained in detail.

113 citations


Journal ArticleDOI
TL;DR: The method shows considerable promise for high-resolution imaging of fine-grain structure on the solar surface and the applicability of the technique in astronomical imaging is considered.
Abstract: A method for recording and restoring images that have been degraded by unknown aberrations is described. Several images are recorded with different masks placed in the pupil of the optical system. If these images are Fourier analyzed and certain phases are combined algebraically in the proper manner, the effects of the unknown aberrations can be removed, with an uncertainty concerning the absolute location of the object with respect to the imaging system. Experimental results are reported. The sensitivity of the method is analyzed, and the applicability of the technique in astronomical imaging is considered. The method shows considerable promise for high-resolution imaging of fine-grain structure on the solar surface.

96 citations


Journal ArticleDOI
TL;DR: It appears that an iterative prediction and correction method can be suitable for the efficient and accurate measurement of elevation from stereo pairs of aerial images.

85 citations



Book ChapterDOI
01 Jan 1973
TL;DR: An attempt has been made to present the material in a way that is ordered according to typical problems of electron microscopy, rather than according to methods of image analysis.
Abstract: Within the past few years, image processing methods have been introduced into a number of fields where experimental visual data have to be analyzed. Examples in the biological field are radiotherapy (Selzer, 1968) and cytology (Mendelsohn et al., 1968). The implementation in electron microscopy is presently developing very fast. The present work gives a review of some experiences in computer analysis of electron microscopic image data, and tries to show some prospects for the use of this tool in the near future. An attempt has been made to present the material in a way that is ordered according to typical problems of electron microscopy, rather than according to methods of image analysis.

62 citations


Patent
29 Jun 1973
TL;DR: In this paper, a special purpose pipeline digital computer is disclosed for processing a pair of related, digitally encoded, images to produce a difference image showing any dissimilarities between the first image and the second image.
Abstract: A special purpose pipeline digital computer is disclosed for processing a pair of related, digitally encoded images to produce a difference image showing any dissimilarities between the first image and the second image. The computer is comprised of a number of special purpose pipeline processors linked to a supervisory general purpose processor. First, an initial image warp transformation is computed by a spatial transformation pipeline processor using a plurality of operator-selected, feature-related match points on the pair of images; then, image correlation is performed by a dot product processor working with a square root and divide processor to identify the exact matching location of a second group of match points, selected in a geometrical pattern, on the pair of images. The final image warp transformation to achieve image registration occurs in the spatial transformation processor, using a localized polylateral technique having the geometrically selected match points as the vertices of the polylaterals. Finally, photoequalization is performed and the difference image is generated from the pair of registered images by a photoequalization processor.

56 citations


Journal ArticleDOI
TL;DR: The vector processing results in a simpler and more accurate image enhancement algorithm in comparison with scalar processing, and the performance of the vector and scalar estimators is compared.
Abstract: A new approach to design of a recursive image enhancer is introduced when the image is characterized statistically by its mean and correlation function. A vector linear dynamical model is derived to represent the statistics of the processor output when several lines of the picture are processed simultaneously. Based on the vector model, a Kalman filter is designed and utilized to recursively enhance the image. The vector processing results in a simpler and more accurate image enhancement algorithm in comparison with scalar processing. Two examples, one with very low signal-to-noise ratio, are used to illustrate the effectiveness of the procedure. Finally, the performance of the vector and scalar estimators is compared.
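The recursive-enhancement idea can be illustrated with a much simpler scalar Kalman filter run along each image line; the random-walk signal model and the noise variances q and r below are illustrative assumptions, not the vector dynamical model derived in the paper:

```python
import numpy as np

def kalman_denoise_rows(img, q=0.01, r=0.1):
    """Recursive (Kalman) smoothing of each image row under a simple
    random-walk model: x_k = x_{k-1} + w_k,  y_k = x_k + v_k,
    with assumed process variance q and measurement variance r."""
    out = np.empty_like(img, dtype=float)
    for i, row in enumerate(img):
        x, p = row[0], 1.0              # state estimate and its variance
        for k, y in enumerate(row):
            p = p + q                   # predict
            g = p / (p + r)             # Kalman gain
            x = x + g * (y - x)         # update with measurement y
            p = (1 - g) * p
            out[i, k] = x
    return out
```

Processing several lines jointly, as the paper's vector formulation does, lets the filter exploit vertical as well as horizontal correlation; this scalar sketch uses only the latter.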

44 citations


Patent
13 Sep 1973
TL;DR: In this article, a mathematical processing of a sequence of pulse signal time functions produced by an ultrasonic imaging scanner is described, where the reflected pulse signal time function values are stored, rearranged in sequence, and convolved with a specified filter function to produce a processed image signal time function for imaging.
Abstract: This invention comprises apparatus for the mathematical processing of a sequence of pulse signal time functions produced by an ultrasonic imaging scanner. The basic processor described herein has a slower processing rate than the input rate of back-reflected pulse signal time function values. The reflected pulse signal time function values are stored, rearranged in sequence, and convolved with a specified filter function to produce a processed image signal time function for imaging. Specific electronic apparatus is utilized, including a digital computer with a stored internal program for Fourier convolution and filter function synthesis.
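The store-and-convolve step amounts to Fourier (FFT-based) convolution of the stored echo sequence with a filter function. A minimal sketch, with hypothetical names rather than the patent's apparatus:

```python
import numpy as np

def filter_pulses(signal, filt):
    """Convolve a stored pulse-echo signal with a filter function via the
    FFT (Fourier convolution), zero-padded to full linear-convolution length."""
    n = len(signal) + len(filt) - 1
    spec = np.fft.rfft(signal, n) * np.fft.rfft(filt, n)
    return np.fft.irfft(spec, n)
```

The zero-padding to length n avoids the circular wrap-around that a plain FFT product would introduce, so the result matches direct time-domain convolution.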

34 citations


Journal ArticleDOI
TL;DR: It is shown that a large class of space-variant operations may be decomposed into the cascade of invertible coordinate transformations and space-invariant systems; a block-diagram representation of this technique is presented.
Abstract: A complete analysis of motion degradation in two-dimensional linear incoherent optical systems is presented from a system viewpoint. An equivalent space-variant point-spread function is derived from knowledge of the system motion, and the special characteristics of space-invariant and one-dimensional systems are considered. The analysis includes the effects of two different models for geometrical distortion in motion-degraded imaging and derives the distortions for aerial imaging. It is shown that a large class of space-variant operations may be decomposed into the cascade of invertible coordinate transformations and space-invariant systems; a block-diagram representation of this technique is presented. A number of examples of common types of motion blur are included, as well as computer simulations of space-variant blur and the decomposition process.

Journal ArticleDOI
TL;DR: In this article, a real-time incoherent-to-coherent optical converter has been demonstrated for Fourier transform image processing, using the photoconductivity effect in the single-crystalline electrooptic material Bi12SiO20 to spatially modulate its electrical polarization.
Abstract: This paper describes the fabrication and demonstration of a real-time incoherent-to-coherent optical converter having applications in image processing systems. Briefly, it utilizes the photoconductivity effect in the single-crystalline electrooptic material Bi12SiO20 to spatially modulate its electrical polarization. An optically absorbed write-in image is stored as an image-wise polarization pattern in the device. Readout is accomplished electrooptically by means of the subsequent phase retardation that a polarized beam of coherent light undergoes in passing through the Bi12SiO20. An operating mode for achieving continuous image conversion with high-speed recyclability is described. The following performance characteristics have been demonstrated: a write-in intensity of 300 µW/cm² at 400 nm yielded a contrast ratio of 50:1 after a 40-ms exposure. When the converter was operated at a frame rate of 10/s, a sampled read-out contrast ratio of greater than 1000:1 was achieved. Resolution in excess of 80 lp/mm has been recorded and read out. The crystal-growing and device-fabrication methods by which 1-square-inch converters have been built and operated are described. Results achieved in using this device to Fourier transform images are also presented.

01 Oct 1973
TL;DR: Theoretical background and digital techniques for a class of image processing problems are presented, and methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.
Abstract: Theoretical background and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

Journal ArticleDOI
TL;DR: The Mertz and Gray analysis of image scanning in terms of multidimensional Fourier transforms is reformulated using a different coordinate system, leading to scanning equations that emphasize the importance of the scanning apertures.
Abstract: The Mertz and Gray analysis of image scanning in terms of multidimensional Fourier transforms is reformulated using a different coordinate system. A simplified mathematical notation is used for this analysis leading to scanning equations that emphasize the importance of the scanning apertures. Other image processes involving scanning are described in terms of their multidimensional Fourier transforms. These include a novel two-dimensional screening technique, and its comparison with a simple screening method and with conventional photographic screening processes. The effect on video spectra of aperture filtering and an analysis of television line interlacing are also given. The conditions for the validity of the unidimensional analysis of some of these image processing systems are indicated.


01 Dec 1973
TL;DR: It is demonstrated that adaptive transform domain modeling is important, and that large-size transforms, in conjunction with the proper image model, can significantly outperform block-encoding techniques.
Abstract: The report includes a detailed analysis of image modeling aspects of the transform coding problem. Two alternative prediction algorithms are analyzed for the transform sample variance estimation: the first technique uses a two-dimensional polynomial to model the image power spectral density; the second technique is a simple recursive approach based on previously quantized values. The generalized phase concept is developed and plays a vital role in the coding algorithms. Both the Fourier and Walsh transforms are used, the former being demonstrated to have superior performance. A non-negative image constraint is explored via the Lukosz bound. The experimental phase of the study includes two-dimensional coding of monochrome, and three-dimensional coding of color, as well as interframe images, with coding at 0.38, 0.55, and 0.25 bits/pixel, respectively. It is demonstrated that adaptive transform domain modeling is important, and that large-size transforms, in conjunction with the proper image model, can significantly outperform block-encoding techniques.

Journal ArticleDOI
TL;DR: Some recent work on filter synthesis will be summarized, and the implications in optical computation, image processing (enhancement), image restoration, and image detection examined.



Journal ArticleDOI
TL;DR: A nonlinear picture-to-picture transformation is introduced and compared with a straightforward linear transformation and is believed to be superior to many techniques presently being employed.
Abstract: A nonlinear picture-to-picture transformation is introduced and compared with a straightforward linear transformation. The criterion for comparison is based on the desire to smooth noisy or textured regions while retaining edge definition. Selected examples from a large number of simulations are presented. The transformation presented is believed to be superior to many techniques presently being employed.
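A familiar example of a nonlinear transformation that smooths noise while retaining edge definition is a median filter, contrasted here with a linear moving average on a single scan line. This is an illustrative stand-in, not the specific transformation introduced in the paper:

```python
import numpy as np

def smooth_linear(row, k=3):
    """Linear smoothing: moving average over a k-sample window."""
    kernel = np.ones(k) / k
    return np.convolve(row, kernel, mode="same")

def smooth_nonlinear(row, k=3):
    """Nonlinear smoothing: running median -- suppresses noise
    in flat regions but leaves step edges unblurred."""
    pad = k // 2
    padded = np.pad(row, pad, mode="edge")
    return np.array([np.median(padded[i:i + k]) for i in range(len(row))])
```

On a clean step edge the median filter returns the input unchanged, whereas the moving average spreads the transition over the window width, which is precisely the edge-blurring that motivates nonlinear processing.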

Patent
Mutsuo Ogawa
31 May 1973
TL;DR: In this paper, the analog image signals are compared with a threshold level signal which alternates, in square wave fashion, between two different threshold levels many times during a single scanline.
Abstract: A system for converting optical images into digital image signals characterized by low redundancy and for transmitting said digital signals. An image is scanned at a low sensitivity level for the first scanline, a high sensitivity level for the second scanline, a low sensitivity level for the third scanline, etc., to generate a succession of analog image signals each corresponding to a single scanline. Each of the analog image signals is compared with a threshold level signal which alternates, in square-wave fashion, between two different threshold levels many times during a single scanline. The comparison results in a transmission signal which is at 0 when the analog image signal is within a first threshold interval, which alternates between 0 and 1 when the analog image signal is within a second threshold interval, and which is at a steady 1 when the analog image signal is within a third threshold interval.
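The effective output of the comparison can be sketched as a three-interval comparator; the threshold values and function name below are hypothetical, and the patent realizes this behavior with an alternating analog threshold signal rather than explicit logic:

```python
def encode_sample(v, t_low=0.3, t_high=0.7, phase=0):
    """Sketch of the three-interval transmission signal (hypothetical
    thresholds): below t_low -> steady 0; above t_high -> steady 1;
    in between -> alternating 0/1 following the square-wave phase."""
    if v < t_low:
        return 0
    if v > t_high:
        return 1
    return phase % 2
```

A run of alternating bits thus marks a mid-gray region, which is what keeps the encoded stream low in redundancy for mostly black-or-white documents.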

Patent
09 Aug 1973
TL;DR: In this paper, a large display system capable of displaying a video image receives video signals, quantizes those signals to produce a digital code capable of representing variations in the light content of the image, and processes the digital code so as to control individual display devices on a large matrix of such devices to have different levels of visibility, reproducing the video image for viewing by a large audience.
Abstract: A large display system capable of displaying a video image receives video signals, quantizes those signals to produce a digital code capable of representing variations in the light content of the image, and processes the digital code so as to control individual display devices on a large matrix of such devices to have different levels of visibility, thereby reproducing the video image for viewing by a large audience. A data processor is utilized to store the digital representation of the video image in memory so that on-line or off-line presentations can be made.

01 Jan 1973
TL;DR: Multiple z-divided filtering appears to be justified, and initial results at minimum encourage further research into the possibility that this technique may become a method of choice.
Abstract: A nonstationary method, multiple z-divided filtering, and a nonlinear method, biased smearing, have been applied to scintigrams. Biased smearing does not appear to hold much promise. Multiple z-divided filtering, on the other hand, appears to be justified, and initial results at minimum encourage further research into the possibility that this technique may become a method of choice.

Book ChapterDOI
TL;DR: The purpose of these remarks is to present a brief description of some techniques of image processing and to offer a few illustrative examples of the ways in which image processing might contribute to studies of red cell shape.
Abstract: The purpose of these remarks is to present a brief description of some techniques of image processing and to offer a few illustrative examples of the ways in which image processing might contribute to studies of red cell shape. Image processing refers to the application of automatic techniques, both digital and analog, to visual images so as to transform the images or to extract quantitative information from them. While it has been known for some time that one can adequately represent a visual image by a mathematical function, it is only within the past 10–15 years that the electronic technology has made it practicable for images to be translated into numerical form and manipulated by computer programs. A number of working groups [8, 9] have already applied image processing to a variety of problems of biological and medical interest; the applications to health science appear to be particularly attractive and numerous, perhaps because so much life science research depends upon visual inspection of biological objects.

Journal ArticleDOI
N.H. Kreitzer, W.J. Fitzgerald
TL;DR: A core-refreshed video display system that can display gray-scale images of 32 intensity levels on a standard monochrome video monitor will be described, which permits simultaneous display of images before and after processing.
Abstract: A core-refreshed video display system that can display gray-scale images of 32 intensity levels on a standard monochrome video monitor will be described. The system can also display flicker-free black and white images of more than 800 000 picture elements. There are special features that allow overlaying black and white images on 16-level gray-scale images and manual cursor control via an X-Y tablet. Multiple reduced size images can be accommodated by features that allow independent manipulation of images in separate areas on the display screen. This permits simultaneous display of images before and after processing.

01 Jan 1973
TL;DR: The JPL VICAR image processing system has been used for the enhancement of images received from ERTS for the Arizona geology mapping experiment; the system contains flexible capabilities for reading and repairing MSS digital tape images, for geometric corrections and interpicture registration, for various enhancements and analyses of the data, and for display of the images in black and white and color.
Abstract: The JPL VICAR image processing system has been used for the enhancement of images received from the ERTS for the Arizona geology mapping experiment. This system contains flexible capabilities for reading and repairing MSS digital tape images, for geometric corrections and interpicture registration, for various enhancements and analyses of the data, and for display of the images in black and white and color.

01 Sep 1973
TL;DR: The Reduced Data Record (RDR) is the set of data which results from the distortion-removal, or decalibration, process applied to images from the Mariner 9 television experiment, which used two cameras to photograph Mars from an orbiting spacecraft.
Abstract: The Mariner 9 television experiment used two cameras to photograph Mars from an orbiting spacecraft. For quantitative analysis of the image data transmitted to earth, the pictures were processed by digital computer to remove camera-induced distortions. The removal process was performed by the JPL Image Processing Laboratory (IPL) using calibration data measured during prelaunch testing of the cameras. The Reduced Data Record (RDR) is the set of data which results from the distortion-removal, or decalibration, process. The principal elements of the RDR are numerical data on magnetic tape and photographic data. Numerical data are the result of correcting for geometric and photometric distortions and residual-image effects. Photographic data are reproduced on negative and positive transparency films, strip contact and enlargement prints, and microfiche positive transparency film. The photographic data consist of two versions of each TV frame created by applying two special enhancement processes to the numerical data.

Journal ArticleDOI
TL;DR: A number of processes used to modify various aspects of pictures to enhance the ability of the human photo interpreter in extracting information are illustrated in this paper.
Abstract: In recent years the modern digital computer has been used to process images, to emphasize details, to sharpen pictures, to modify the tonal range, to aid picture interpretation, to remove anomalies, and to extract quantitative information. A price to be paid for this extreme flexibility in handling linear and non-linear operations is that a number of anomalies caused by the camera, such as geometric distortion, MTF roll-off, vignetting, and non-uniform intensity response, must be taken into account or removed to avoid their interference with the information extraction process. Once this is done, computer techniques may be used to emphasize details, perform analyses, classify materials by multi-variate analysis (usually multi-spectral), detect temporal differences, etc. Digital processing may also be used to modify various aspects of pictures to enhance the ability of the human photo interpreter in extracting information. A number of these processes are illustrated in this paper.

R. Bernstein
01 Jan 1973
TL;DR: In this paper, the quality and resolution of the digitally processed images are very good, due primarily to the fact that the number of film generations and conversions is reduced to a minimum Processing times of digitally processed image are about equivalent to the NDPF electro-optical processor.
Abstract: ERTS-1 MSS and RBV data recorded on computer compatible tapes have been analyzed and processed, and preliminary results have been obtained. No degradation of intensity (radiance) information occurred in implementing the geometric correction. The quality and resolution of the digitally processed images are very good, due primarily to the fact that the number of film generations and conversions is reduced to a minimum. Processing times of digitally processed images are about equivalent to the NDPF electro-optical processor.

DOI
20 Feb 1973
TL;DR: The approach is to associate quantitative signal-to-noise ratios with simple geometric images as developed by electro-optical sensors, to determine the observer's SNR needs through psychophysical experimentation and then to correlate the detectability of these simple images with the visual discrimination of the images of real objects.
Abstract: Electro-optical sensors can be of significant aid to our law enforcement agencies particularly if their capabilities and limitations are fully understood. In the following, the imaging process is discussed as it applies to the needs and requirements of security, surveillance and law enforcement. Our approach is to associate quantitative signal-to-noise ratios with simple geometric images as developed by electro-optical sensors, to determine the observer's SNR needs through psychophysical experimentation and then, through further psychophysical experimentation, to correlate the detectability of these simple images with the visual discrimination of the images of real objects. The visual discrimination tasks we consider are simple image detection and the higher order tasks of recognition and identification. The concepts developed form a rational basis for the selection of electro-optical equipments which have a reasonable expectation of actually performing a desired function.