
Showing papers on "Image processing" published in 1970


Dissertation
01 Nov 1970
TL;DR: A method will be described for finding the shape of a smooth opaque object from a monocular image, given a knowledge of the surface photometry, the position of the light-source and certain auxiliary information to resolve ambiguities; the method is complementary to the use of stereoscopy.
Abstract: A method will be described for finding the shape of a smooth opaque object from a monocular image, given a knowledge of the surface photometry, the position of the light-source and certain auxiliary information to resolve ambiguities. This method is complementary to the use of stereoscopy, which relies on matching up sharp detail and will fail on smooth objects. Until now the image processing of single views has been restricted to objects which can meaningfully be considered two-dimensional or bounded by plane surfaces. It is possible to derive a first-order non-linear partial differential equation in two unknowns relating the intensity at the image points to the shape of the object. This equation can be solved by means of an equivalent set of five ordinary differential equations. A curve traced out by solving this set of equations for one set of starting values is called a characteristic strip. Starting one of these strips from each point on some initial curve will produce the whole solution surface. The initial curves can usually be constructed around so-called singular points. A number of applications of this method will be discussed, including one to lunar topography and one to the scanning electron microscope. In both of these cases great simplifications occur in the equations. A note on polyhedra follows, and a quantitative theory of facial make-up is touched upon. An implementation of some of these ideas on the PDP-6 computer with its attached image-dissector camera at the Artificial Intelligence Laboratory will be described, and also a nose-recognition program.
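
For reference, the partial differential equation and the equivalent characteristic-strip system described above can be written compactly in the now-standard reflectance-map notation; the symbols below (E for image brightness, R for surface reflectance, p and q for the surface gradient) are an assumed modern shorthand rather than the thesis's own notation:
\[
  E(x,y) = R(p,q), \qquad p = \frac{\partial z}{\partial x}, \quad q = \frac{\partial z}{\partial y},
\]
and the equivalent set of five ordinary differential equations, whose solution from one set of starting values traces a characteristic strip \((x, y, z, p, q)(s)\), is
\[
  \frac{dx}{ds} = R_p, \quad
  \frac{dy}{ds} = R_q, \quad
  \frac{dz}{ds} = p\,R_p + q\,R_q, \quad
  \frac{dp}{ds} = E_x, \quad
  \frac{dq}{ds} = E_y .
\]
Launching one such strip from each point of an initial curve, typically constructed around a singular point of E, sweeps out the solution surface.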

634 citations




Journal ArticleDOI
TL;DR: The digital image processing techniques developed at JPL have been applied to images from a wide range of disciplines, and the system considerations involved in adequately and efficiently doing image processing are discussed briefly.
Abstract: The extreme flexibility of the digital method of image processing makes a wide variety of linear and nonlinear processes possible. The digital image processing techniques developed at JPL have been applied to images from a wide range of disciplines. Results of this processing for picture generation, intensity and geometric manipulation, spatial frequency operations, and image analysis are shown. The system considerations involved in adequately and efficiently doing image processing are discussed briefly.
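
As a rough illustration of two of the operation classes mentioned (intensity manipulation and spatial-frequency operations), the NumPy sketch below applies a percentile contrast stretch and a Fourier-domain low-pass filter; the function names, percentiles and cutoff are assumptions of this sketch, not parameters from the JPL system.

    import numpy as np

    def contrast_stretch(img, lo=2, hi=98):
        """Linear intensity manipulation: map the lo..hi percentiles to 0..1."""
        a, b = np.percentile(img, [lo, hi])
        return np.clip((img - a) / (b - a), 0.0, 1.0)

    def lowpass_filter(img, cutoff=0.1):
        """Spatial-frequency operation: keep frequencies below `cutoff` (cycles/pixel)."""
        F = np.fft.fft2(img)
        fy = np.fft.fftfreq(img.shape[0])[:, None]
        fx = np.fft.fftfreq(img.shape[1])[None, :]
        mask = (fx ** 2 + fy ** 2) <= cutoff ** 2
        return np.real(np.fft.ifft2(F * mask))

    # Example on a synthetic image.
    img = np.random.rand(128, 128)
    out = lowpass_filter(contrast_stretch(img))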

118 citations


Journal ArticleDOI
TL;DR: In this paper a grey-weighted skeleton is defined for grey-valued continuous and quantized images, and a procedure for the inversion of a grey-weighted skeleton, i.e., for obtaining a binary reconstructed image from a grey-weighted skeleton, is proposed.
Abstract: Skeletons have been largely used as descriptors of shape in the field of image processing. Only binary pictures, however, have been considered so far. In this paper a grey-weighted skeleton is defined for grey-valued continuous and quantized images. In order to extend the invertibility property of the skeleton to the grey case, a transformation is defined, which is a generalization of both direct and inverse binary skeleton transformations. By taking advantage of the properties of this transformation, a procedure for the inversion of a grey-weighted skeleton (i.e., for obtaining a binary reconstructed image from a grey-weighted skeleton) is finally proposed.
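
As background for the grey-level generalization described above, the sketch below illustrates the binary case only: a skeleton built from the local maxima of a chamfer (chessboard) distance transform, stored together with their distance values, and its exact inversion as a union of discs in the same metric. The library calls and names are assumptions of this sketch, not the paper's construction.

    import numpy as np
    from scipy import ndimage

    def binary_skeleton_transform(binary):
        """Binary skeleton with quench values: keep pixels whose chessboard
        distance to the background is not exceeded by any 8-neighbour, and store
        that distance value at each retained pixel."""
        dist = ndimage.distance_transform_cdt(binary, metric='chessboard')
        local_max = (dist == ndimage.maximum_filter(dist, size=3)) & binary
        return np.where(local_max, dist, 0)

    def invert_skeleton_transform(skel, shape):
        """Reconstruct the binary image as the union of chessboard discs centred
        on skeleton pixels, with radii given by the stored distance values."""
        out = np.zeros(shape, dtype=bool)
        yy, xx = np.indices(shape)
        for y, x in zip(*np.nonzero(skel)):
            out |= np.maximum(np.abs(yy - y), np.abs(xx - x)) < skel[y, x]
        return out

    binary = np.zeros((64, 64), dtype=bool)
    binary[16:48, 20:44] = True
    rec = invert_skeleton_transform(binary_skeleton_transform(binary), binary.shape)
    assert (rec == binary).all()   # exact because the same chamfer metric is used twice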

113 citations


Journal ArticleDOI
TL;DR: In this paper, a theory of image formation is presented for a large-angle, point reference hologram, whose recording arrangement consists of a surface of arbitrary shape, a point reference source, and the object.
Abstract: A theory of image formation is presented for a large-angle, point reference hologram, whose recording arrangement consists of a surface of arbitrary shape, a point reference source, and the object. The hologram is illuminated by a spherical wave during reconstruction. The resulting image field is similar to that of a Fourier-transform hologram. An exact, integral formulation of monochromatic, scalar diffraction theory is used to find the image field. The hologram is modeled by surface sources determined from the irradiance of the recorded field. The image field produced by the holographic system approximates the field produced by the ideal system, which forms the image of a point object by launching a converging, spherical wave.
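
For reference, one exact integral formulation of monochromatic scalar diffraction theory, valid for a closed surface S of arbitrary shape, is the integral theorem of Helmholtz and Kirchhoff, written here in the Born and Wolf sign convention; the notation is assumed for this note and is not taken from the paper:
\[
  U(P) = \frac{1}{4\pi} \iint_{S} \left[ U\,\frac{\partial}{\partial n}\!\left(\frac{e^{ikr}}{r}\right) - \frac{e^{ikr}}{r}\,\frac{\partial U}{\partial n} \right] \mathrm{d}S ,
\]
where r is the distance from the surface element to the observation point P, k = 2π/λ, and ∂/∂n denotes differentiation along the inward normal of S.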

104 citations


Journal ArticleDOI
TL;DR: The fundamental identity between the operations of vector convolution and polynomial multiplication is exploited to provide a general-purpose alternative to the method of spatial filtering for digitally deconvolving noisy, degraded images of incoherently illuminated objects.
Abstract: The fundamental identity between the operations of vector convolution and polynomial multiplication is exploited to provide a general-purpose alternative to the method of spatial filtering for digitally deconvolving noisy, degraded images of incoherently illuminated objects. The method is remotely related to those of linear programming, but differs significantly from them in its exploitation of the special properties of convolution. Sampled image arrays are treated as points in euclidean n space. The convolution relation, together with bounds on individual recorded and point-spread image irradiance values, defines a set of linear constraints on the restored image-irradiance values. These constraints define a convex region of possible restorations in n space. A method is described for selecting a point (i.e., an estimate of the restored image) from near the center of this region. The human viewer may then readjust the original constraints to reflect the new information revealed by his interpretation of the restored-image estimate. The deconvolution calculations can then be repeated with the readjusted constraints, to yield a possibly better estimate. The method is applicable to restoration problems in which both the recorded image and the point-spread image contain noise. Furthermore, it is applicable to any problem requiring the numerical solution of a convolution equation involving measured data. The connection with Fourier-transform theory and comparison with spatial-filtering methods are touched upon briefly. A few computer restorations are shown, to illustrate the practicality and potential of the method.
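
A minimal numerical sketch of the constraint formulation described above is given below for a one-dimensional signal: the convolution relation and the bounds on recorded and restored values define linear inequalities, and a point near the centre of the resulting convex region is chosen here as the Chebyshev centre of the polytope via a linear program. The Chebyshev-centre rule, the sizes and the noise bound are assumptions of this sketch, not the selection procedure used in the paper.

    import numpy as np
    from scipy.linalg import toeplitz
    from scipy.optimize import linprog

    def convolution_matrix(h, n):
        """Dense matrix H with H @ f == np.convolve(h, f) (full 1-D convolution)."""
        m = len(h) + n - 1
        col = np.r_[h, np.zeros(m - len(h))]
        row = np.zeros(n)
        row[0] = h[0]
        return toeplitz(col, row)

    def central_restore(g, h, n, noise, f_max):
        """Pick a restoration near the centre of the feasible region
        { f : |H f - g| <= noise, 0 <= f <= f_max }, here its Chebyshev centre,
        found with one linear program over the variables [f, r]."""
        H = convolution_matrix(h, n)
        A = np.vstack([H, -H, np.eye(n), -np.eye(n)])
        b = np.r_[g + noise, noise - g, np.full(n, f_max), np.zeros(n)]
        norms = np.linalg.norm(A, axis=1)
        res = linprog(c=np.r_[np.zeros(n), -1.0],          # maximise the ball radius r
                      A_ub=np.c_[A, norms], b_ub=b,
                      bounds=[(None, None)] * n + [(0, None)])
        return res.x[:n]

    # Tiny synthetic example; sizes, noise bound and point-spread function are illustrative only.
    rng = np.random.default_rng(0)
    f_true = rng.random(16)                        # unknown object samples in [0, 1)
    h = np.array([0.25, 0.5, 0.25])                # assumed point-spread function
    g = np.convolve(h, f_true) + rng.normal(0, 0.01, 16 + len(h) - 1)
    f_est = central_restore(g, h, n=16, noise=0.05, f_max=1.0)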

55 citations


Journal ArticleDOI
TL;DR: This work examines degradations for the binary Fourier transform hologram and presents a method by which the plotting procedure may be designed so as to yield a most faithful reconstructed image.
Abstract: Generation of holograms by computer allows the possibility of better controlling the hologram formation process and of displaying a synthesized image in the case where the object does not exist physically. However, limitations of equipment used to plot the hologram can cause degradation of the reconstructed image. We examine these degradations for the binary Fourier transform hologram and present a method by which the plotting procedure may be designed so as to yield a most faithful reconstructed image. Experimental results which support the analysis are included.

55 citations


Journal ArticleDOI
TL;DR: The description of a machine that performs a variety of image processing operations is given, together with a theoretical discussion of its operation.
Abstract: The description of a machine that performs a variety of image processing operations is given, together with a theoretical discussion of its operation. Spatial processing is performed by corrective convolution techniques. Density processing is achieved by means of an electrical transfer function generator included in the video circuit. Examples of images processed for removal of image motion blur, defocus, and atmospheric seeing blur are shown.
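
Since the machine itself is analog, the sketch below only imitates its two processing stages digitally: a corrective convolution with a small kernel approximating the inverse of the blur, followed by a point-wise transfer curve for density processing. The regularised-inverse construction, kernel size and gamma value are assumptions of this sketch, not the machine's design.

    import numpy as np
    from scipy import ndimage

    def corrective_kernel(psf, shape, eps=1e-2, support=9):
        """Small kernel approximating the inverse of `psf`: a regularised inverse
        filter computed in the Fourier domain and truncated to a compact support."""
        padded = np.zeros(shape)
        padded[:psf.shape[0], :psf.shape[1]] = psf
        padded = np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
        H = np.fft.fft2(padded)
        inv = np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + eps)))
        inv = np.fft.fftshift(inv)
        cy, cx, k = shape[0] // 2, shape[1] // 2, support // 2
        return inv[cy - k:cy + k + 1, cx - k:cx + k + 1]

    def density_transfer(img, gamma=0.7):
        """Point-wise 'density processing': a simple non-linear transfer curve."""
        return np.clip(img, 0.0, None) ** gamma

    # Illustrative use on a synthetic uniform (defocus-like) blur.
    rng = np.random.default_rng(1)
    sharp = rng.random((64, 64))
    psf = np.ones((5, 5)) / 25.0
    blurred = ndimage.convolve(sharp, psf, mode='reflect')
    restored = density_transfer(ndimage.convolve(blurred, corrective_kernel(psf, blurred.shape)))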

46 citations


01 Jan 1970
TL;DR: An experimental computer program is described that analyzes pictures taken in a simple, but nevertheless real-world, robot environment by building up, step by step, a partial line drawing representation of a digitized television picture.
Abstract: This paper describes an experimental computer program that analyzes pictures taken in a simple, but nevertheless real-world, robot environment. The analysis proceeds by building up, step by step, a partial line drawing representation of a digitized television picture. An interesting feature of the system is an executive program that uses detailed knowledge of the environment to control other programs that extract the partial line drawing. Examples are given to illustrate the operation of this experimental program.

16 citations


Patent
02 Mar 1970
TL;DR: In this paper, a hollow beam of monochromatic light is deflected repetitively through a scanning pattern and focused upon an image plane in which is located an object desired to be scanned.
Abstract: A hollow beam of monochromatic light is deflected repetitively through a scanning pattern and focused upon an image plane in which is located an object desired to be scanned. A detector senses light reflected from or transmitted through the object to develop video signals that at each instant of time represent light intensity. These video signals are fed to an image reproducer that creates an image of the object in synchronization with the scanning pattern. The apparatus further includes a filter system that effects a modification of the video signals in order to restore image contrast which otherwise is seriously degraded as a consequence of utilizing a hollow light beam in order to obtain an increase in depth of field for the optical system. The disclosure also concerns using the apparatus for video recording as well as reproducing.

PatentDOI
TL;DR: The mechanism of noise reduction is described for spatial frequency diversity systems, and some experiments are described that illustrate noise reduction and multiple filtering possibilities of this technique.
Abstract: Apparatus for improving the quality of an input transparency used in an optical processor. The input transparency is illuminated with coherent light which has been passed through a fine grating or grid. The grid causes the transparency to be broken up into a plurality of diffraction patterns. When the image is reconstructed in the output plane, there is a marked decrease in the localization of reflection noise which would otherwise have been caused by the system.

Patent
06 Apr 1970
TL;DR: A total image storage and retrieval system has a copier, step and repeat camera for photographing documents on microfilm frames, and means for developing the microfilm and assembling the frames on microfiche cards that are encoded and stored in a large capacity storage bin having associated means for automatically retrieving the micro-fiche as mentioned in this paper.
Abstract: A total image storage and retrieval system has a copier, step and repeat camera for photographing documents on microfilm frames, and means for developing the microfilm and assembling the frames on microfiche cards that are encoded and stored in a large capacity storage bin having associated means for automatically retrieving the microfiche. One or more retrieval stations, each typically having a keyboard for selecting images sought to be retrieved and a display, such as a television tube, for displaying the selected retrieved images, include means for signaling the data file bin to produce a specific image, scan the image thus retrieved, store the scanned retrieved image and transmit the stored scanned retrieved image to designated displays, typically the display tube associated with the interrogating keyboard, or to a printer that may provide a copy of the retrieved image in permanent form, and/or to data processing machines to interpret the information or data for further transmission or processing with other pertinent information and/or data. A specific image may be produced by writing data on a film by an optical system, which may include a laser source. The data may be read by a second optical system, such as an optical fiber detector array.

Journal ArticleDOI
TL;DR: An optical image processor works by corrective convolution rather than by complex spatial filtering, and an incorporated flexible function generator permits quantization, contour generation and other non-linear operations to be performed in the playback, with or without the optical processing.

Journal ArticleDOI
TL;DR: In this article, a 3D reconstruction of the aortic bifurcation from magnetic resonance angiograms is presented, which not only provides 3D visualisation of the structure, but also serves as an interface for further quantitative analysis of the fluid dynamics in the model.
Abstract: Magnetic Resonance Imaging (MRI) can produce a series of parallel 2D cross-sectional images of an arterial vessel. Based on these images, the three-dimensional structure of such a vessel can be obtained by image processing techniques. In this paper, a novel image processing and three-dimensional reconstruction approach is presented which permits the recreation of the aortic bifurcation from magnetic resonance angiograms. The reconstruction output not only provides 3D visualisation of the bifurcation structure, but also serves as an interface for further quantitative analysis of the fluid dynamics in the model.
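
The abstract does not detail the reconstruction algorithm, so the sketch below is a generic stand-in: each 2D cross-section is thresholded, the slices are stacked into a binary volume, and a triangulated surface is extracted with marching cubes (recent scikit-image API assumed). The threshold, spacing and synthetic slices are illustrative only.

    import numpy as np
    from skimage import measure

    def reconstruct_vessel(slices, threshold, spacing=(1.0, 1.0, 1.0)):
        """Stack parallel 2-D cross-sections into a binary volume and extract a
        triangulated surface of the vessel lumen with marching cubes."""
        volume = np.stack([s > threshold for s in slices], axis=0).astype(np.uint8)
        verts, faces, _, _ = measure.marching_cubes(volume, level=0.5, spacing=spacing)
        return verts, faces

    # Synthetic stand-in for MR angiogram slices: one circular lumen per slice.
    yy, xx = np.mgrid[:64, :64]
    slices = [((yy - 32) ** 2 + (xx - 32) ** 2 < (10 + z) ** 2).astype(float)
              for z in range(20)]
    verts, faces = reconstruct_vessel(slices, threshold=0.5, spacing=(2.0, 1.0, 1.0))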

Patent
22 May 1970
TL;DR: In this article, a multispectral laser camera made up of a plurality of energy producing means which scan a target area is used to produce a true color image of the target after being processed through a series of photomultiplier tubes, amplifiers and modulators.
Abstract: A multispectral laser camera made up of a plurality of energy producing means which scan a target area. The resultant reflected light off the target produces a true color image of the target after being processed through a series of photomultiplier tubes, amplifiers and modulators. By selectively varying the connections between the above elements a false color image of the scanned target may also be produced. This false color image overcomes the difficulties in detecting camouflaged targets.

Journal ArticleDOI
TL;DR: Digital image processing techniques have been developed at the Jet Propulsion Laboratory which enable us to characterize these various distortions using calibration data, and to remove them from images returned by the two Mariner spacecraft.

Journal ArticleDOI
01 Jan 1970
TL;DR: This paper describes signal processing in the encrypted domain, i.e., processing performed after encryption but before decryption, in which signal processing operations can be applied directly to encrypted signals without decrypting them.
Abstract: This paper describes signal processing in the encrypted domain, i.e., processing that takes place after encryption but before decryption. In this framework, signal processing operations can be applied directly to encrypted signals without decrypting them, whereas the ordinary framework encrypts signals for transmission and/or storage but decrypts them before any signal processing operations are applied. The described framework suits contemporary cloud computing, in which not only transmission but also storage and processing are done over the public Internet. In addition to a brief survey, two tangible application scenarios are demonstrated, each introducing a new signal processing algorithm.
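
A minimal, self-contained illustration of the idea of processing in the encrypted domain is sketched below using the additively homomorphic Paillier cryptosystem: two encrypted signal samples are added without ever decrypting them. The choice of cryptosystem and the toy key size are assumptions of this sketch; the paper's own algorithms are not reproduced here.

    from math import gcd
    import random

    def lcm(a, b):
        return a * b // gcd(a, b)

    def keygen(p, q):
        """Paillier key generation with the common choice g = n + 1."""
        n = p * q
        lam = lcm(p - 1, q - 1)
        mu = pow((pow(n + 1, lam, n * n) - 1) // n, -1, n)   # inverse of L(g^lam mod n^2)
        return (n,), (lam, mu, n)

    def encrypt(pk, m):
        (n,) = pk
        r = random.randrange(1, n)
        while gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

    def decrypt(sk, c):
        lam, mu, n = sk
        return ((pow(c, lam, n * n) - 1) // n) * mu % n

    pk, sk = keygen(104729, 1299709)           # toy primes, far too small for real use
    c1, c2 = encrypt(pk, 42), encrypt(pk, 58)  # two encrypted signal samples
    c_sum = (c1 * c2) % (pk[0] ** 2)           # addition performed in the encrypted domain
    assert decrypt(sk, c_sum) == 100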

Book ChapterDOI
01 Jan 1970
TL;DR: No consensus on the meaning of image processing exists, and to some, it is the conversion of one image to another image in which details are more easily discernible or which is more pleasing in some visual sense.
Abstract: Almost everyone agrees on what is meant by data processing. Because an image is simply a two-dimensional array of data, no further definition of image processing would appear to be required. But, in fact, no consensus on the meaning of image processing exists. To some, it is the conversion of one image to another image in which details are more easily discernible or which is more pleasing in some visual sense. To some, it is the encoding, transmission, and decoding of image data within the limits of available bandwidth. To others, it is the series of decisions performed in order to classify the contents of an image in accordance with some criterion for the “recognition of patterns.”

Journal ArticleDOI
TL;DR: A tool is proposed that integrates a modular graphical interface for image processing and an expert system shell generator that provides three knowledge representation formalisms, forward and backward control strategies, and several conflict resolution methods.
Abstract: A tool is proposed that integrates a modular graphical interface for image processing and an expert system shell generator. The tool provides three knowledge representation formalisms (logic, frames and semantic nets), forward and backward control strategies, and several conflict resolution methods. These features can be combined to construct expert system shells with different levels of complexity. The tool is intended to facilitate the implementation of expert systems in the domain of image processing, and to be used as a teaching and research laboratory in knowledge representation and expert system architecture.

Journal ArticleDOI
TL;DR: An original image processing database system and a graphical user interface for convenient and easy operation were completed; the system enables the soundness and durability of the tunnel wall to be diagnosed more accurately than previously.
Abstract: The inspection system for railway facilities, under development since 1991 and based on continuous scan images (CSI) taken by a linear sensor camera, has reached the practical application stage. Many railway facilities are so extensive that this system provides an effective tool for capturing their images. This paper reports in particular on the tunnel inspection system. The structural soundness of a railway tunnel is currently inspected and maintained through visual monitoring of deformations such as cracking on the wall. In practice, however, the work environment is so poor and the monitoring area so extensive that it is very difficult to follow every deformation developing on the wall down to the last crack. We completed an original image processing database system and a graphical user interface for convenient and easy operation. This system enables us to diagnose the soundness and durability of the tunnel wall more accurately than previously.

Journal ArticleDOI
01 Jan 1970
TL;DR: In this article, a system for detecting states of distraction in drivers during daylight hours using machine vision techniques, which is based on the image segmentation of the eyes and mouth of a person, with a front-face-view camera, was presented.
Abstract: This article presents a system for detecting states of distraction in drivers during daylight hours using machine vision techniques. It is based on the image segmentation of the eyes and mouth of a person, using a front-face-view camera. From this segmentation, states of motion of the mouth and head are established, allowing the corresponding state of distraction to be inferred. Images are extracted from short videos with a resolution of 640x480 pixels, and image processing techniques such as color space transformation and histogram analysis are applied. A decision concerning the state of the driver results from a multilayer perceptron-type neural network with all extracted features as inputs. The achieved performance is 90% for a controlled-environment screening test and 86% in a real environment, with an average response time of 30 ms.
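
The sketch below imitates only the tail end of the described pipeline, under stated assumptions: a colour-space transformation, histogram features and a multilayer-perceptron decision. It does not reproduce the eye/mouth segmentation or motion-state logic, and the frames, bin counts and network size are placeholders.

    import numpy as np
    import cv2
    from sklearn.neural_network import MLPClassifier

    def frame_features(bgr_frame, bins=32):
        """Colour-space transformation plus histogram analysis for one frame."""
        ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
        hists = [cv2.calcHist([ycrcb], [c], None, [bins], [0, 256]).ravel()
                 for c in range(3)]
        feats = np.concatenate(hists)
        return feats / (feats.sum() + 1e-9)

    # Placeholder frames and labels (0 = attentive, 1 = distracted); a real system
    # would use the segmented eye/mouth regions described in the abstract instead.
    frames = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(20)]
    labels = np.random.randint(0, 2, 20)
    X = np.array([frame_features(f) for f in frames])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, labels)
    decision = clf.predict(X[:1])     # 0/1 distraction decision for a new frame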

Journal ArticleDOI
TL;DR: A laser triangulation inspection system which has been integrated into an automatic stitching machine used to stitch overlapping components to locate defects that can occur prior to and during the stitching process so that the stitch path can be modified accordingly.
Abstract: This paper describes a laser triangulation inspection system which has been integrated into an automatic stitching machine used to stitch overlapping components. The inspection task locates defects that can occur prior to and during the stitching process so that the stitch path can be modified accordingly. The three-dimensional shape of the surface and component edge positions are extracted through the analysis of the laser line images captured by a high frame rate camera. Different stages of the data collection aspect of the inspection process are presented. These include calibration and laser line image processing techniques. The data modelling of the extracted edge points is based on least-square polynomial regression and parametric curve approximation. The obtained data model is used for stitch path planning. These techniques were tested on an automatic stitching machine.
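
As a small illustration of the edge-modelling step, the sketch below fits a least-squares polynomial to extracted edge points and derives a stitch path at a fixed offset from the modelled edge; the points, polynomial degree and offset are assumptions of this sketch rather than values from the machine.

    import numpy as np

    # Synthetic stand-ins for component-edge points extracted from the laser-line
    # images; the quadratic model, noise level and seam offset are assumptions.
    x = np.linspace(0, 100, 50)
    y = 0.002 * x ** 2 + 0.1 * x + 5 + np.random.normal(0, 0.2, x.size)

    coeffs = np.polyfit(x, y, deg=2)              # least-squares polynomial regression
    edge_model = np.poly1d(coeffs)

    seam_offset = 4.0                             # stitch offset from the edge (assumed units)
    stitch_path = [(xi, edge_model(xi) - seam_offset)
                   for xi in np.linspace(0, 100, 200)]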

Journal ArticleDOI
TL;DR: A large-aperture optical system may be operated as a multi-element interferometer such that a posteriori image processing can yield images that are free from the usual distortions of atmospheric “seeing”.

Journal ArticleDOI
01 Jan 1970
TL;DR: Experimental results show that an image-and-depth transmission system is more suitable than transmitting all multi-view images or transmitting only free-viewpoint images, and that transmitting two images and one depth map and then predicting the opposite depth map is best for live communication.
Abstract: In this paper, we propose systems that can render free-viewpoint images using depth-image-based rendering for live video communications. Experimental results show that an image-and-depth transmission system is more suitable than a system that transmits all multi-view images or one that transmits only free-viewpoint images. In particular, within the image-and-depth transmission system, transmitting two images and one depth map and then predicting the opposite depth map is best for live communication.

Proceedings ArticleDOI
01 Jun 1970
TL;DR: The one-dimensional line-scan analysis proposed by Beall is extended to encompass a two-dimensional sampling process for evaluating the image transfer in a fiber bundle and it is indicated that there is a significant difference in resolution between the static case and the dynamically-scanned case.
Abstract: A statistical analysis of degradation in a scanned image has been carried out by Beall (Ref. 1). In that analysis, the mean-squared value of the error function between the object and the image was used as a measure of the degree of fidelity in the image, and numerical data have been published for several particular cases. A similar approach to image transfer in optical fiber bundles is presented. In this paper, the one-dimensional line-scan analysis proposed by Beall is extended to encompass a two-dimensional sampling process for evaluating image transfer in a fiber bundle. Interesting results occur when the evaluation technique is extended to a comparison of the imaging properties of a static bundle and a bundle which is scanned dynamically (Ref. 2). This analysis indicates that there is a significant difference in resolution between the static case and the dynamically scanned case. The difference is found to be largely determined by the fiber configuration within the bundle. Experimental results for the static bundle will be presented.

Journal ArticleDOI
01 Jan 1970
TL;DR: In order to reduce and constrain the space of reconstructed images, a frequency-domain Tikhonov regularization technique is employed, and it is shown that the quality of the reconstructed image is much better than that of the traditional algorithm in a noisy environment.
Abstract: In recent years, Compressive Sensing (CS) has become an attractive topic in the field of information theory. It is widely used in several areas, including networking, image processing and digital cameras. In particular, image reconstruction from a small number of measured components is its best-known application. In this paper, the SL0 algorithm is used for the reconstruction process. It significantly decreases the processing time by utilizing a matrix in which the number of rows is much smaller than the number of columns, and it is therefore known as one of the fastest and most accurate algorithms in CS. However, due to the ill-posed condition, if prior information about the original image is unavailable, the reconstruction procedure of SL0 is strongly affected by noise. Unfortunately, investigations into solving this ill-posed condition of SL0 are very limited, so SL0 is not widely applied in practice. Consequently, this paper proposes a novel regularization technique for the SL0 algorithm in the frequency domain. In order to reduce and constrain the space of reconstructed images, a frequency-domain Tikhonov regularization technique is employed. It is shown that the quality of the reconstructed image is much better than that of the traditional algorithm in a noisy environment. The experimental results are simulated for three images (Lena, Susie and Cameraman) under both Gaussian and non-Gaussian noise models (AWGN, Poisson noise, salt-and-pepper noise and speckle noise) at different noise powers.
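
For context, the basic SL0 iteration that the paper builds on is sketched below (without the proposed frequency-domain Tikhonov regularization): a Gaussian-smoothed L0 measure is improved by small gradient steps, each followed by a projection back onto the measurement constraint, while the smoothing width sigma is gradually decreased. Parameter values and problem sizes are illustrative.

    import numpy as np

    def sl0(A, y, sigma_min=0.001, sigma_decrease=0.5, mu=2.0, L=3):
        """Basic Smoothed-L0 (SL0) recovery of a sparse x from y = A x.
        This is the standard algorithm the paper starts from, not the authors'
        frequency-domain Tikhonov variant."""
        A_pinv = A.T @ np.linalg.inv(A @ A.T)       # assumes A has full row rank
        x = A_pinv @ y                              # minimum-norm initial solution
        sigma = 2.0 * np.max(np.abs(x))
        while sigma > sigma_min:
            for _ in range(L):
                delta = x * np.exp(-x ** 2 / (2 * sigma ** 2))   # gradient step on the
                x = x - mu * delta                               # Gaussian-smoothed L0 measure
                x = x - A_pinv @ (A @ x - y)                     # project back onto {x : A x = y}
            sigma *= sigma_decrease
        return x

    # Tiny compressive-sensing example; sizes and sparsity are illustrative only.
    rng = np.random.default_rng(0)
    n, m, k = 200, 80, 8
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x_hat = sl0(A, A @ x_true)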

Journal ArticleDOI
TL;DR: In this article, the performance characteristics of typical image transfer systems such as microscope lenses, process lenses, IR trackers and fibre face plates are presented by way of example, and all the techniques described have been developed over the last ten years to satisfy the special requirements of both manufacturers and users of electro-optical systems.
Abstract: The performance of an optical system can be specified by the OTF or by means of a new method employing pupil scanning. Whilst the former technique is most suitable when an image is transferred through a chain of units operating in sequence, the latter system was developed for testing high performance trackers for which a maximum of energy concentration is required. Instruments for measuring the OTF, displaying spot diagrams, transverse ray aberrations and wavefront aberrations are compared and contrasted. The performance characteristics of typical image transfer systems such as microscope lenses, process lenses, IR trackers and fibre face plates are presented by way of example. All the techniques described have been developed over the last ten years to satisfy the special requirements of both manufacturers and users of electro-optical systems.
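
As a small numerical aside on the OTF specification mentioned above, the sketch below computes an MTF curve from a sampled point-spread function by Fourier transformation and normalisation; the Gaussian PSF, sample spacing and units are assumptions of this sketch, not measurements from the instruments described.

    import numpy as np

    dx = 0.5e-3                                   # sample spacing in mm (0.5 micron)
    x = (np.arange(256) - 128) * dx
    xx, yy = np.meshgrid(x, x)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * (2e-3) ** 2))   # toy Gaussian PSF, sigma = 2 microns

    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    otf /= otf[128, 128]                          # normalise so that MTF(0) = 1
    mtf = np.abs(otf)

    freqs = np.fft.fftshift(np.fft.fftfreq(256, d=dx))        # spatial frequency, cycles/mm
    mtf_curve = list(zip(freqs[128:], mtf[128, 128:]))        # radial cut along the +u axis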

Proceedings ArticleDOI
E. Arthurs, W. S. Bartlett, D. J. Ladd, R. L. Salmon, J. H. Whipple
05 May 1970
TL;DR: Picturelab is an interactive program system for experimentation in picture processing built using the SIS facility on a GE 635 as a replacement for a previous batch facility.
Abstract: Picturelab is an interactive program system for experimentation in picture processing built using the SIS facility on a GE 635. Picturelab has been prepared as a replacement for a previous batch facility. The batch facility had the major drawback that most studies had to be run overnight. When graphic results were required, microfilm processing time was added to the already long turn-around time. Since most present research in image processing continues to be done on a trial-and-error basis, the long turn-around times associated with the batch runs resulted in a combination of painfully slow progress and long running times.

Patent
14 Apr 1970