
Showing papers on "Image processing published in 1984"


Journal ArticleDOI
TL;DR: A fast parallel thinning algorithm that consists of two subiterations: one aimed at deleting the south-east boundary points and the north-west corner points, while the other is aimed at deleting the north-west boundary points and the south-east corner points.
Abstract: A fast parallel thinning algorithm is proposed in this paper. It consists of two subiterations: one aimed at deleting the south-east boundary points and the north-west corner points, while the other is aimed at deleting the north-west boundary points and the south-east corner points. End points and pixel connectivity are preserved. Each pattern is thinned down to a skeleton of unitary thickness. Experimental results show that this method is very effective.
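The two-subiteration scheme described here matches the well-known Zhang-Suen procedure. A minimal NumPy sketch, assuming a 0/1 array whose border pixels are background:

```python
import numpy as np

def zhang_suen_thinning(img):
    """Thin a binary image (1 = object, 0 = background) to a unit-width skeleton."""
    img = img.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    # 8-neighbours P2..P9, clockwise starting at the north pixel.
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                                      # neighbour count
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1       # 0 -> 1 transitions
                            for i in range(8))
                    if step == 0:   # delete south-east boundary / north-west corner points
                        cond = p[0] * p[2] * p[4] == 0 and p[2] * p[4] * p[6] == 0
                    else:           # delete north-west boundary / south-east corner points
                        cond = p[0] * p[2] * p[6] == 0 and p[0] * p[4] * p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:  # parallel deletion: applied only after the scan
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img
```

Deleting flagged pixels only after each full scan is what makes the algorithm parallel: every pixel's fate depends on the same snapshot of the image.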

2,243 citations


Journal ArticleDOI
TL;DR: There are several image segmentation techniques, some considered general purpose and some designed for specific classes of images. These techniques can be classified as: measurement-space-guided spatial clustering, single-linkage region growing, hybrid-linkage region growing, centroid-linkage region growing, spatial clustering, and split-and-merge schemes.
Abstract: There are now a wide variety of image segmentation techniques, some considered general purpose and some designed for specific classes of images. These techniques can be classified as: measurement space guided spatial clustering, single linkage region growing schemes, hybrid linkage region growing schemes, centroid linkage region growing schemes, spatial clustering schemes, and split-and-merge schemes. In this paper, we define each of the major classes of image segmentation techniques and describe several specific examples of each class of algorithm. We illustrate some of the techniques with examples of segmentations performed on real images.
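As a concrete illustration of one class in this taxonomy, here is a minimal single-linkage region-growing sketch; the intensity threshold and 4-connectivity are illustrative choices, not taken from the paper:

```python
import numpy as np
from collections import deque

def single_linkage_segments(img, thresh=10.0):
    """Label an image by single-linkage region growing: 4-adjacent pixels are
    merged into one region whenever their intensity difference is below `thresh`."""
    labels = np.zeros(img.shape, dtype=int)
    next_label = 0
    for seed in np.ndindex(img.shape):
        if labels[seed]:
            continue
        next_label += 1
        labels[seed] = next_label
        queue = deque([seed])
        while queue:                      # breadth-first flood of the linked region
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and not labels[ny, nx]
                        and abs(float(img[ny, nx]) - float(img[y, x])) < thresh):
                    labels[ny, nx] = next_label
                    queue.append((ny, nx))
    return labels
```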

2,009 citations


Journal ArticleDOI
TL;DR: The 3-D fractal model provides a characterization of 3-D surfaces and their images for which the appropriateness of the model is verifiable, and this characterization is stable over transformations of scale and linear transforms of intensity.
Abstract: This paper addresses the problems of 1) representing natural shapes such as mountains, trees, and clouds, and 2) computing their description from image data. To solve these problems, we must be able to relate natural surfaces to their images; this requires a good model of natural surface shapes. Fractal functions are a good choice for modeling 3-D natural surfaces because 1) many physical processes produce a fractal surface shape, 2) fractals are widely used as a graphics tool for generating natural-looking shapes, and 3) a survey of natural imagery has shown that the 3-D fractal surface model, transformed by the image formation process, furnishes an accurate description of both textured and shaded image regions. The 3-D fractal model provides a characterization of 3-D surfaces and their images for which the appropriateness of the model is verifiable. Furthermore, this characterization is stable over transformations of scale and linear transforms of intensity. The 3-D fractal model has been successfully applied to the problems of 1) texture segmentation and classification, 2) estimation of 3-D shape information, and 3) distinguishing between perceptually "smooth" and perceptually "textured" surfaces in the scene.
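The fractal characterization lends itself to a simple spectral estimator: a 2-D fractal Brownian image has a power spectrum falling off as f**(-beta), with fractal dimension D = 4 - beta/2. A rough sketch of this estimator, a common approach consistent with the model rather than necessarily the paper's exact procedure:

```python
import numpy as np

def fractal_dimension_spectral(img):
    """Estimate fractal dimension from the slope of the radially averaged
    power spectrum of an image region (assumed roughly square, e.g. 64x64)."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    cy, cx = np.array(power.shape) // 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    # Radial average of the power spectrum at each integer frequency.
    radial = np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())
    fmax = min(cy, cx)
    freqs = np.arange(1, fmax)                     # skip the DC bin
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[1:fmax]), 1)
    # power ~ f**(-beta) gives fitted slope = -beta, so D = 4 - beta/2.
    return 4.0 + slope / 2.0
```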

1,919 citations


Journal Article
TL;DR: It is proposed that, for the task of object recognition, the visual system decomposes shapes into parts using a rule defining part boundaries rather than part shapes, that the rule exploits a uniformity of nature—transversality, and that parts with their descriptions and spatial relations provide a first index into a memory of shapes.

1,271 citations


01 Nov 1984
TL;DR: A variety of pyramid methods developed for image data compression, enhancement, analysis, and graphics; the pyramid representation is versatile, convenient, and efficient to use.
Abstract: The data structure used to represent image information can be critical to the successful completion of an image processing task. One structure that has attracted considerable attention is the image pyramid. This consists of a set of lowpass or bandpass copies of an image, each representing pattern information of a different scale. Here we describe a variety of pyramid methods that we have developed for image data compression, enhancement, analysis and graphics. It is becoming increasingly clear that the format used to represent image data can be as critical in image processing as the algorithms applied to the data. A digital image is initially encoded as an array of pixel intensities, but this raw format is not suited to most tasks. Alternatively, an image may be represented by its Fourier transform, with operations applied to the transform coefficients rather than to the original pixel values. This is appropriate for some data compression and image enhancement tasks, but inappropriate for others. The transform representation is particularly unsuited for machine vision and computer graphics, where the spatial location of pattern elements is critical. Recently there has been a great deal of interest in representations that retain spatial localization as well as localization in the spatial-frequency domain. This is achieved by decomposing the image into a set of spatial-frequency bandpass component images. Individual samples of a component image represent image pattern information that is appropriately localized, while the bandpassed image as a whole represents information about a particular fineness of detail or scale. There is evidence that the human visual system uses such a representation, and multiresolution schemes are becoming increasingly popular in machine vision and in image processing in general. The importance of analyzing images at many scales arises from the nature of images themselves. Scenes in the world contain objects of many sizes, and these objects contain features of many sizes. Moreover, objects can be at various distances from the viewer. As a result, any analysis procedure that is applied only at a single scale may miss information at other scales. The solution is to carry out analyses at all scales simultaneously. Convolution is the basic operation of most image analysis systems, and convolution with large weighting functions is a notoriously expensive computation. In a multiresolution system one wishes to perform convolutions with kernels of many sizes, ranging from very small to very large, and the computational problems appear forbidding. Therefore one of the main problems in working with multiresolution representations is to develop fast and efficient techniques. Members of the Advanced Image Processing Research Group have been actively involved in the development of multiresolution techniques for some time. Most of the work revolves around a representation known as a "pyramid," which is versatile, convenient, and efficient to use. We have applied pyramid-based methods to some fundamental problems in image analysis, data compression, and image manipulation.
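A minimal sketch of the Laplacian-pyramid construction this describes, in the style of Burt's REDUCE/EXPAND operations; the 5-tap kernel with a = 0.4 is a conventional choice assumed here, not taken from this text:

```python
import numpy as np
from scipy.ndimage import convolve

# Separable 5-tap generating kernel commonly used for Burt-style pyramids (a = 0.4).
_w = np.array([0.05, 0.25, 0.40, 0.25, 0.05])
KERNEL = np.outer(_w, _w)

def pyr_reduce(img):
    """One REDUCE step: lowpass filter, then subsample by two."""
    return convolve(img, KERNEL, mode='nearest')[::2, ::2]

def pyr_expand(img, shape):
    """One EXPAND step: upsample by two, then interpolate."""
    up = np.zeros(shape)
    up[::2, ::2] = img
    return convolve(up, 4.0 * KERNEL, mode='nearest')

def laplacian_pyramid(img, levels=4):
    """Decompose an image into bandpass levels plus a final lowpass residual."""
    pyramid, current = [], img.astype(float)
    for _ in range(levels):
        smaller = pyr_reduce(current)
        pyramid.append(current - pyr_expand(smaller, current.shape))
        current = smaller
    pyramid.append(current)              # lowpass residual
    return pyramid

def reconstruct(pyramid):
    """Invert the decomposition exactly by expanding and summing."""
    current = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        current = band + pyr_expand(current, band.shape)
    return current
```

Because each bandpass level stores exactly what the next lowpass level discards, the reconstruction is exact, which is what makes the structure useful for compression as well as analysis.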

1,158 citations


Journal ArticleDOI
TL;DR: This work examines the relationship between phase and amplitude in the case of alphanumeric characters, with and without noise, using a computer simulation and compares the phase-only and amplitude-only filters to the classical matched filter using the criteria of discrimination, correlation peak, and optical efficiency.
Abstract: From image processing work, we know that the phase information is significantly more important than amplitude information in preserving the features of a visual scene. Is the same true in the case of a matched filter? From previous work [J. L. Horner, Appl. Opt. 21, 4511 (1982)], we know that a pure phase correlation filter can have an optical efficiency of 100% in an optical correlation system. We examine this relationship between phase and amplitude in the case of alphanumeric characters, with and without noise, using a computer simulation. We compare the phase-only and amplitude-only filters to the classical matched filter using the criteria of discrimination, correlation peak, and optical efficiency. Three-dimensional plots of the autocorrelation and cross-correlation functions are presented and discussed.
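The three filters under comparison are easy to state digitally. A hedged sketch, as a discrete simulation rather than the optical correlator itself:

```python
import numpy as np

def correlate(scene, reference, mode="phase"):
    """Correlate `scene` with `reference` using a classical matched filter
    (mode="matched"), a phase-only filter (mode="phase"), or an
    amplitude-only filter (mode="amplitude")."""
    S = np.fft.fft2(scene)
    R = np.fft.fft2(reference, s=scene.shape)   # zero-pad to the scene size
    H = np.conj(R)                              # classical matched filter
    if mode == "phase":
        H = H / (np.abs(H) + 1e-12)             # keep only the filter's phase
    elif mode == "amplitude":
        H = np.abs(H)                           # keep only the filter's amplitude
    return np.real(np.fft.ifft2(S * H))

# Usage: the phase-only plane typically shows a much sharper correlation peak.
# plane = correlate(scene, reference, mode="phase"); peak = plane.max()
```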

939 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that seven point correspondences are sufficient to uniquely determine from two perspective views the three-dimensional motion parameters (within a scale factor for the translations) of a rigid object with curved surfaces.
Abstract: Two main results are established in this paper. First, we show that seven point correspondences are sufficient to uniquely determine from two perspective views the three-dimensional motion parameters (within a scale factor for the translations) of a rigid object with curved surfaces. The seven points should not be traversed by two planes with one plane containing the origin, nor by a cone containing the origin. Second, a set of "essential parameters" are introduced which uniquely determine the motion parameters up to a scale factor for the translations, and can be estimated by solving a set of linear equations which are derived from the correspondences of eight image points. The actual motion parameters can subsequently be determined by computing the singular value decomposition (SVD) of a 3×3 matrix containing the essential parameters.
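A sketch of the linear eight-point estimation of the essential parameters; the sign fixing and selection of the physically valid rotation/translation pair, which a complete method must resolve, are omitted here:

```python
import numpy as np

def essential_from_points(p1, p2):
    """Estimate the 3x3 matrix of essential parameters from >= 8 point
    correspondences. Rows of p1, p2 are normalized image coordinates (x, y, 1);
    the epipolar constraint p2^T E p1 = 0 is solved in least squares."""
    A = np.stack([np.outer(b, a).ravel() for a, b in zip(p1, p2)])
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)          # null vector of A, defined up to scale

def motion_from_essential(E):
    """Recover rotation candidates and the translation direction via the SVD of E.
    (Signs of the factors may need flipping so det(R) = +1; a cheirality test
    then picks the physical solution.)"""
    u, _, vt = np.linalg.svd(E)
    w = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    r1, r2 = u @ w @ vt, u @ w.T @ vt    # two rotation candidates
    t = u[:, 2]                          # translation known only up to scale
    return r1, r2, t
```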

915 citations


Journal ArticleDOI
01 Dec 1984
TL;DR: The extended Gaussian image is defined and some of its properties discussed; an elaboration for nonconvex objects is presented and several examples are shown.
Abstract: This is a primer on extended Gaussian images. Extended Gaussian images are useful for representing the shapes of surfaces. They can be computed easily from: (1) needle maps obtained using photometric stereo; or (2) depth maps generated by ranging devices or binocular stereo. Importantly, they can also be determined simply from geometric models of the objects. Extended Gaussian images can be of use in at least two of the tasks facing a machine vision system: (1) recognition, and (2) determining the attitude in space of an object. Here, the extended Gaussian image is defined and some of its properties discussed. An elaboration for nonconvex objects is presented and several examples are shown.
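A minimal sketch of accumulating an EGI from a set of surface normals and patch areas; the latitude-longitude binning is an illustrative choice, since practical systems tessellate the sphere more uniformly:

```python
import numpy as np

def extended_gaussian_image(normals, areas, n_bins=16):
    """Accumulate an extended Gaussian image: a histogram over orientation in
    which each surface patch contributes its area at the cell of the sphere
    containing its unit normal. `normals` is (N, 3), `areas` is (N,)."""
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))   # polar angle in [0, pi]
    phi = np.arctan2(normals[:, 1], normals[:, 0])         # azimuth in (-pi, pi]
    egi, _, _ = np.histogram2d(theta, phi, bins=n_bins,
                               range=[[0, np.pi], [-np.pi, np.pi]],
                               weights=areas)
    return egi
```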

738 citations


BookDOI
01 Jan 1984
TL;DR: An edited volume on multiresolution image processing, with chapters ranging from a hierarchical image analysis system based upon oriented zero crossings of bandpassed images to a tutorial on quadtree research.
Abstract: Contents:
I. Image Pyramids and Their Uses: 1. Some Useful Properties of Pyramids. 2. The Pyramid as a Structure for Efficient Computation.
II. Architectures and Systems: 3. Multiprocessor Pyramid Architectures for Bottom-Up Image Analysis. 4. Visual and Conceptual Hierarchy: A Paradigm for Studies of Automated Generation of Recognition Strategies. 5. Multiresolution Processing. 6. The Several Steps from Icon to Symbol Using Structured Cone/Pyramids.
III. Modelling, Processing, and Segmentation: 7. Time Series Models for Multiresolution Images. 8. Node Linking Strategies in Pyramids for Image Segmentation. 9. Multilevel Image Reconstruction. 10. Sorting, Histogramming, and Other Statistical Operations on a Pyramid Machine.
IV. Features and Shape Analysis: 11. A Hierarchical Image Analysis System Based Upon Oriented Zero Crossings of Bandpassed Images. 12. A Multiresolution Representation for Shape. 13. Multiresolution Feature Encodings. 14. Multiple-Size Operators and Optimal Curve Finding.
V. Region Representation and Surface Interpolation: 15. A Tutorial on Quadtree Research. 16. Multiresolution 3-D Image Processing and Graphics. 17. Multilevel Reconstruction of Visual Surfaces: Variational Principles and Finite-Element Representations.
VI. Time-Varying Analysis: 18. Multilevel Relaxation in Low-Level Computer Vision. 19. Region Matching in Pyramids for Dynamic Scene Analysis. 20. Hierarchical Estimation of Spatial Properties from Motion.
VII. Applications: 21. Multiresolution Microscopy. 22. Two-Resolution Detection of Lung Tumors in Chest Radiographs.
Index of Contributors.

623 citations


Journal ArticleDOI
TL;DR: Algorithms to recover illuminant direction and estimate surface orientation have been evaluated on both natural and synthesized images, and have been found to produce useful information about the scene.
Abstract: Local analysis of image shading, in the absence of prior knowledge about the viewed scene, may be used to provide information about the scene. The following has been proved. Every image point has the same image intensity and first and second derivatives as the image of some point on a Lambertian surface with principal curvatures of equal magnitude. Further, if the principal curvatures are assumed to be equal there is a unique combination of image formation parameters (up to a mirror reversal) that will produce a particular set of image intensity and first and second derivatives. A solution for the unique combination of surface orientation, etc., is presented. This solution has been extended to natural imagery by using general position and regional constraints to obtain estimates of the following: surface orientation at each image point; the qualitative type of the surface, i.e., whether the surface is planar, cylindrical, convex, concave, or saddle; and the illuminant direction within a region. Algorithms to recover illuminant direction and estimate surface orientation have been evaluated on both natural and synthesized images, and have been found to produce useful information about the scene.

412 citations


Journal ArticleDOI
01 Jan 1984
TL;DR: The morphological skeleton is shown to unify many previous approaches to skeletonization, and some of its theoretical properties are investigated.
Abstract: This paper presents the results of a study on the use of morphological set operations to represent and encode a discrete binary image by parts of its skeleton, a thinned version of the image containing complete information about its shape and size. Using morphological erosions and openings, a finite image can be uniquely decomposed into a finite number of skeleton subsets and then the image can be exactly reconstructed by dilating the skeleton subsets. The morphological skeleton is shown to unify many previous approaches to skeletonization, and some of its theoretical properties are investigated. Fast algorithms that reduce the original quadratic complexity to linear are developed for skeleton decomposition and reconstruction. Partial reconstructions of the image are quantified through the omission of subsets of skeleton points. The concepts of a globally and locally minimal skeleton are introduced and fast algorithms are developed for obtaining minimal skeletons. For images containing blobs and large areas, the skeleton subsets are much thinner than the original image. Therefore, encoding of the skeleton information results in lower information rates than optimum block-Huffman or optimum runlength-Huffman coding of the original image. The highest level of image compression was obtained by using Elias coding of the skeleton.
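The decomposition and exact reconstruction can be sketched directly from the morphological formulas: the n-th skeleton subset is the n-fold erosion of X by B minus its opening by B, and X is recovered as the union of each subset dilated back n times. A SciPy-based sketch with an assumed 3x3 square structuring element:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_opening, binary_dilation

STRUCT = np.ones((3, 3), bool)   # structuring element B (an assumed 3x3 square)

def skeleton_subsets(image):
    """Decompose a binary image into morphological skeleton subsets:
    S_n = (X eroded n times by B) minus its opening by B."""
    subsets, eroded = [], image.astype(bool)
    while eroded.any():
        subsets.append(eroded & ~binary_opening(eroded, STRUCT))
        eroded = binary_erosion(eroded, STRUCT)
    return subsets

def reconstruct(subsets):
    """Exactly rebuild the image by dilating each S_n back n times by B."""
    image = np.zeros_like(subsets[0])
    for n, s in enumerate(subsets):
        grown = s.copy()
        for _ in range(n):
            grown = binary_dilation(grown, STRUCT)
        image |= grown
    return image
```

Dropping some subsets before reconstruction gives the partial reconstructions the abstract mentions, at the price of losing fine detail.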

Journal ArticleDOI
TL;DR: Two methods are developed to reduce the blocking effect when the image is reconstructed at the decoder due to discontinuities between the subimages in transform coding.
Abstract: In some important image coding techniques, such as transform coding, an image is first divided into subimages, and then each subimage is coded independently. The segmentation procedure has significant advantages, but when used in a low bit rate scheme, an undesirable side effect can occur. Specifically, when the image is reconstructed at the decoder, a "blocking effect" can develop due to discontinuities between the subimages. Two methods are developed to reduce the blocking effect. The performance of these methods when applied to a discrete cosine transform image coder is discussed.
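As a hedged illustration of the boundary-filtering idea: smooth only the pixels adjacent to subimage boundaries, leaving block interiors untouched. The 3-tap averages and the assumption that image dimensions are multiples of the block size are mine, not the paper's:

```python
import numpy as np

def smooth_block_boundaries(img, block=8):
    """Reduce blocking artifacts by low-pass filtering only the pixel pairs
    that straddle subimage boundaries. Assumes image dimensions are
    multiples of `block`."""
    out = img.astype(float).copy()
    for b in range(block, img.shape[1], block):      # vertical boundaries
        out[:, b - 1] = (img[:, b - 2] + img[:, b - 1] + img[:, b]) / 3.0
        out[:, b] = (img[:, b - 1] + img[:, b] + img[:, b + 1]) / 3.0
    for b in range(block, img.shape[0], block):      # horizontal boundaries
        out[b - 1, :] = (out[b - 2, :] + out[b - 1, :] + out[b, :]) / 3.0
        out[b, :] = (out[b - 1, :] + out[b, :] + out[b + 1, :]) / 3.0
    return out
```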

Journal ArticleDOI
TL;DR: In this paper, the use of a binary, magneto-optic spatial light modulator as an input device and as a spatial filter in a VanderLugt correlator is investigated.
Abstract: The use of a binary, magneto-optic spatial light modulator as an input device and as a spatial filter in a VanderLugt correlator is investigated. The statistics of the correlation that is obtained when the input image or the spatial filter is thresholded are estimated. Optical correlation using the magneto-optic device at the input and Fourier planes of a VanderLugt correlator is demonstrated experimentally.

DOI
01 Oct 1984
TL;DR: In this paper, the authors used the maximum entropy method for reconstructing images from many types of data, such as optical deconvolutions and tomographic reconstructions, and showed that it has a privileged position as the only consistent method for combining different data into a single image.
Abstract: Maximum entropy has proved to be an enormously powerful tool for reconstructing images from many types of data. It has a privileged position as the only consistent method for combining different data into a single image. It has been used most spectacularly in radio astronomical interferometry, where it deals routinely with images of up to a million pixels and high dynamic range. We also give examples of optical deconvolutions and tomographic reconstructions to illustrate the generality of application and the quality of maximum entropy images. Some types of data, such as Fourier intensities, are inadequate in themselves to produce a good image. The maximum entropy method allows us to incorporate extra prior knowledge about the object being imaged, and we give examples of this technique being used in spectroscopy. The nonlinearities inherent in the method also permit a state-of-the-art example of 'blind' deconvolution, in which an unknown object is blurred with an unknown point-spread function: maximum entropy can recover both.
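A toy sketch of entropy-regularized reconstruction in this spirit: a crude exponentiated-gradient iteration, not the practical maximum entropy algorithms the paper relies on. The objective weighting, initialization, and step size are all illustrative assumptions:

```python
import numpy as np

def maxent_reconstruct(data, forward, adjoint, shape,
                       alpha=1.0, step=0.05, n_iter=500):
    """Reconstruct a positive image from linear data by ascent on the
    entropy-regularized objective  S(f) - chi2(f) / (2 * alpha),
    where S = -sum(f * log f) and chi2 = ||forward(f) - data||^2.

    `forward` maps an image of `shape` into data space (e.g. a blur or a
    projection operator) and `adjoint` is its transpose. For a symmetric
    blur the two coincide, e.g.:
        blur = lambda f: scipy.ndimage.gaussian_filter(f, 2.0)
        est = maxent_reconstruct(blurred, blur, blur, blurred.shape)
    """
    f = np.full(shape, max(float(data.mean()), 1e-3))   # uniform positive start
    for _ in range(n_iter):
        residual = forward(f) - data
        grad = -(1.0 + np.log(f)) - adjoint(residual) / alpha
        # Multiplicative (exponentiated-gradient) update keeps f positive.
        f = np.clip(f * np.exp(step * grad), 1e-12, 1e12)
    return f
```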

Journal ArticleDOI
TL;DR: A nonlinear temporal filtering algorithm using motion compensation for reducing noise in image sequences is shown to be successful in improving image quality and also improving the efficiency of subsequent image coding operations.
Abstract: Noise in television signals degrades both the image quality and the performance of image coding algorithms. This paper describes a nonlinear temporal filtering algorithm using motion compensation for reducing noise in image sequences. A specific implementation for NTSC composite television signals is described, and simulation results on several video sequences are presented. This approach is shown to be successful in improving image quality and also improving the efficiency of subsequent image coding operations.
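A minimal sketch of a nonlinear recursive temporal filter of this kind. The paper's filter additionally compensates along estimated motion vectors before differencing; that warp is noted but omitted here, and the gain parameters are illustrative:

```python
import numpy as np

def temporal_filter(frames, k_min=0.2, sigma=10.0):
    """Nonlinear recursive temporal filtering of an image sequence. Each output
    pixel blends the incoming frame with the running estimate; the blending
    gain rises toward 1 where the temporal difference is large (likely motion),
    so moving areas pass through while static areas are heavily averaged.

    A motion-compensated version would first warp `est` along estimated
    displacement vectors so that the difference measures only noise."""
    est = frames[0].astype(float)
    out = [est.copy()]
    for frame in frames[1:]:
        diff = frame.astype(float) - est
        k = k_min + (1.0 - k_min) * (1.0 - np.exp(-(diff / sigma) ** 2))
        est = est + k * diff
        out.append(est.copy())
    return out
```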

Journal ArticleDOI
TL;DR: An adaptive neighborhood feature enhancement technique that enhances the visibility of objects and details in an image; the enhanced images should aid diagnosis of breast cancer without requiring the additional x-ray dose needed for xeromammography.
Abstract: Digital techniques are presented for xerography-like enhancement of features in film mammograms. The mammographic image is first digitized using a procedure for gray scale dynamic range expansion. A pixel operator is then applied to the image, which performs contrast enhancement according to a specified function. The final transformation leads to either a positive or negative mode display as desired. We also present an adaptive neighborhood feature enhancement technique that enhances visibility of objects and details in an image. The availability of the enhanced images should aid diagnosis of breast cancer without requiring the additional x-ray dose needed for xeromammography.
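A rough sketch of the two stages described, with illustrative functions and parameters standing in for the paper's specified ones:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance(img, gamma=2.0, weight=3.0, size=15, negative=False):
    """Global pixel-operator contrast enhancement followed by an
    adaptive-neighborhood step that amplifies each pixel's deviation from
    its local mean. Gamma curve, gain, and window size are illustrative."""
    x = img.astype(float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)   # dynamic range expansion
    x = x ** gamma                                    # contrast-enhancing pixel operator
    local_mean = uniform_filter(x, size=size)         # adaptive neighborhood baseline
    x = np.clip(local_mean + weight * (x - local_mean), 0.0, 1.0)
    return 1.0 - x if negative else x                 # negative-mode display option
```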

Journal ArticleDOI
TL;DR: This paper demonstrates that a meaningful system response can be calculated by averaging over an ensemble of point-source system inputs to yield an MTF which accounts for the combined effects of image formation, sampling, and image reconstruction.
Abstract: Sampling generally causes the response of a digital imaging system to be locally shift-variant and not directly amenable to Modulation Transfer Function (MTF) analysis. However, this paper demonstrates that a meaningful system response can be calculated by averaging over an ensemble of point-source system inputs to yield an MTF which accounts for the combined effects of image formation, sampling, and image reconstruction. As an illustration, the MTF of the Landsat MSS system is analyzed to reveal an average effective instantaneous field of view which is significantly larger than the commonly accepted value, particularly in the along-track direction where undersampling contributes markedly to an MTF reduction and resultant increase in image blur.

Journal ArticleDOI
TL;DR: In this article, the effects of various digital parameters, such as sampling aperture, sampling distance, number of quantization levels, and display aperture, on the noise Wiener spectrum of digital radiographic imaging systems were investigated theoretically.
Abstract: We investigated theoretically the effects of various digital parameters, such as sampling aperture, sampling distance, number of quantization levels, and display aperture, on the noise Wiener spectrum of digital radiographic imaging systems. We also measured Wiener spectra for our digital image simulation/processing system, and the results agreed well with the theoretical predictions. Aliasing, which is an artifact caused by undersampling, and the use of a limited number of quantization levels were found to increase the Wiener spectrum for digital systems. The effects of image processing, including unsharp mask filtering, integration, and subtraction, on the Wiener spectrum were also demonstrated. Since noise influences the detectability of radiologic objects and thus diagnostic accuracy, knowledge of the effects of the various digital parameters on the noise spectrum will be useful in the evaluation and design of future digital imaging systems.

Book
11 Nov 1984
TL;DR: A canonical set of image processing problems representing the class of functions typically required in most image processing applications is presented, and the best techniques for each problem, and how they work, are addressed.
Abstract: Contents: Contributors. Preface.
1. Image Enhancement: I. Introduction. II. Enhancement Techniques. III. Further Comments. IV. Bibliographic Notes. References.
2. Image Restoration: I. Statement of the Problem. II. Direct Techniques of Image Restoration. III. Indirect Techniques of Image Restoration. IV. Identification of the Point Spread Function. V. Assessment of Techniques. VI. Bibliographical Notes. References.
3. Image Detection and Estimation: I. Introduction. II. Detecting Known Objects. III. Detecting Random Objects. IV. Estimating Random Curves. V. Conclusions. VI. Bibliographical Notes. References.
4. Image Reconstruction from Projections: I. Introduction. II. Computational Procedures for Image Reconstruction. III. The Theory of Filtered-Backprojection Algorithms. IV. The Theory of Algebraic Algorithms. V. Aliasing Artifacts. VI. Bibliographical Notes. References.
5. Image Data Compression: I. Introduction. II. Spatial Domain Methods. III. Transform Coding. IV. Hybrid Coding and Vector DPCM. V. Interframe Coding. VI. Coding of Graphics. VII. Applications. VIII. Bibliography. References.
6. Image Spectral Estimation: I. Introduction. II. Background. III. Techniques. IV. Summary. V. Bibliographical Notes. References.
7. Image Analysis: I. Introduction. II. Image Segmentation. III. Region Description and Segmentation. IV. Bibliographical Notes. References.
8. Image Processing Systems: I. Introduction. II. Current Context of Image Processing. III. System Hardware Architecture. IV. Image Processing Display Hardware. V. Image Processing Software. VI. Issues Involved in Evaluating Image Processing Systems. VII. Conclusion. References.
Author Index. Subject Index.

Journal ArticleDOI
TL;DR: In this paper, two improved methods of computer image reconstruction are presented; one involves an approximate form of partial coherence that allows the use of fast Fourier transforms (FFTs) to reduce the required computer time.

Journal Article
TL;DR: It is concluded that two-dimensional digital image restoration with these techniques can produce a significant increase in SPECT image quality, with a small cost in processing time when these techniques are implemented on an array processor.
Abstract: Presently, single photon emission computed tomographic (SPECT) images are usually reconstructed by arbitrarily selecting a one-dimensional "window" function for use in reconstruction. A better method would be to automatically choose among a family of two-dimensional image restoration filters in such a way as to produce "optimum" image quality. Two-dimensional image processing techniques offer the advantages of a larger statistical sampling of the data for better noise reduction, and two-dimensional image deconvolution to correct for blurring during acquisition. An investigation of two such "optimal" digital image restoration techniques (the count-dependent Metz filter and the Wiener filter) was made. They were applied both as two-dimensional "window" functions for preprocessing SPECT images, and for filtering reconstructed images. Their performance was compared by measuring image contrast and per cent fractional standard deviation (% FSD) in multiple acquisitions of the Jaszczak SPECT phantom at two different count levels. A statistically significant increase in image contrast and decrease in % FSD was observed with these techniques when compared to the results of reconstruction with a ramp filter. The adaptability of the techniques was manifested in a lesser % reduction in % FSD at the high count level coupled with a greater enhancement in image contrast. Using an array processor, processing time was 0.2 sec per image for the Metz filter and 3 sec for the Wiener filter. It is concluded that two-dimensional digital image restoration with these techniques can produce a significant increase in SPECT image quality.
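The two filter families named here have standard frequency-domain forms. A hedged sketch, in which a constant noise-to-signal ratio and a fixed Metz power stand in for the count-dependent adaptation the paper describes:

```python
import numpy as np

def wiener_restore(image, mtf, noise_to_signal=0.01):
    """Restore a blurred, noisy image with a frequency-domain Wiener filter
    W = MTF* / (|MTF|^2 + N/S). `mtf` must be sampled on the same FFT
    frequency grid as the image."""
    spectrum = np.fft.fft2(image)
    w = np.conj(mtf) / (np.abs(mtf) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(spectrum * w))

def metz_filter(mtf, power):
    """Metz filter M = (1 - (1 - MTF^2)**power) / MTF for a real, nonnegative
    MTF; larger `power` (rising with image count) recovers more resolution
    at the cost of amplifying noise."""
    return (1.0 - (1.0 - mtf ** 2) ** power) / np.maximum(mtf, 1e-6)
```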

Journal Article
TL;DR: In this article, a region of interest (ROI) evaluation method for computed tomography has been developed, where projected regions are convolved and then used to form multiple vector inner products with the raw tomographic data sets.
Abstract: A new algorithm for region of interest evaluation in computed tomography has been developed. Region of interest evaluation is a technique used to improve quantitation of the tomographic imaging process by summing (or averaging) the reconstructed quantity throughout a volume of particular significance. An important application of this procedure arises in the analysis of dynamic emission computed tomographic data, wherein the uptake and clearance of radiotracers are used to determine the blood flow and/or physiologic function of tissue within the significant volume. The new algorithm replaces the conventional technique of repeated image reconstructions with one in which projected regions are convolved and then used to form multiple vector inner products with the raw tomographic data sets. Quantitation of regions of interest is made without the need for reconstruction of tomographic images. The computational advantage of the new algorithm over conventional methods is between a factor of 20 and a factor of 500 for typical applications encountered in medical science studies. The greatest benefit of the new algorithm (and the motivation for its development) is the ease with which the statistical uncertainty of the result is computed. The entire covariance matrix for the regions of interest can be calculated with relatively few operations. Knowledge of the statistical uncertainties and correlations of the regions of interest results in a more efficient estimation of model parameters in the subsequent analysis of physiologic function from dynamic data.

Journal ArticleDOI
TL;DR: The results of some experiments on estimating the 3-D motion parameters of a rigid body from two consecutive TV images are described, and several factors which affect the accuracy of the results are discussed.
Abstract: This paper describes the results of some experiments on estimating the 3-D motion parameters of a rigid body from two consecutive TV images, and discusses several factors which affect the accuracy of the results. These factors include the sizes of the motion parameters themselves, the accuracy of the raw data, and the number of point correspondences. In addition, we address two related topics: determining corner positions to subpixel accuracy and matching point patterns with different scales.

Journal ArticleDOI
TL;DR: This dissertation describes the development of a new radiographic reconstruction method designated Ectomography that allows reconstruction of an arbitrarily thick layer of an object using a limited viewing angle and estimation and filtering of local image information.

Journal ArticleDOI
TL;DR: In this paper, the quantum limits on simultaneous phase and squared-amplitude measurements made via optical heterodyne detection on a single-mode radiation field are established from a fully quantum mechanical treatment of heterodyning with ideal photon detectors.
Abstract: The quantum limits on simultaneous phase and squared-amplitude measurements made via optical heterodyne detection on a single-mode radiation field are established. The analysis proceeds from a fully quantum mechanical treatment of heterodyning with ideal photon detectors. A high mean field uncertainty principle is proven for simultaneous phase and squared-amplitude observations under the condition that the signal and image band states are independent, and the image band has zero mean. Operator representations are developed which show that no such principle applies when arbitrary signal/image band dependence is permitted, although the mean observations are no longer functions of the signal field alone. A multimode two-photon coherent state illustrating this behavior at finite energy is exhibited. Potential applications for the resulting improved accuracy measurements are briefly described.

Journal ArticleDOI
TL;DR: Improvement in detectability is ascribed to a reduction in the relative magnitude of the human observer's "internal" noise after image processing, which is accounted for by a statistical decision theory model that includes internal noise.
Abstract: Detection studies of simulated low-contrast radiographic patterns were performed with a high-quality digital image processing system. The original images, prepared with conventional screen-film systems, were processed digitally to enhance contrast by a "windowing" technique. The detectability of simulated patterns was quantified in terms of the results of observer performance experiments by using the multiple-alternative forced-choice method. The processed images demonstrated a significant increase in observer detection performance over that for the original images. These results are related to the displayed and perceived signal-to-noise ratios derived from signal detection theory. The improvement in detectability is ascribed to a reduction in the relative magnitude of the human observer's "internal" noise after image processing. The measured dependence of threshold signal contrast on object size and noise level is accounted for by a statistical decision theory model that includes internal noise.

Journal ArticleDOI
TL;DR: In this paper a new class of similarity measures is introduced, derived from counting the number of sign changes in the scanned subtraction image; it leads to registration algorithms that are demonstrated to be far more robust than the methods currently in use.
Abstract: The computer comparison of two images requires a registration step which is usually performed by optimizing a similarity measure with respect to the registration parameters. In this paper a new class of similarity measures is introduced, derived from the calculation of the number of sign changes in the scanned subtraction image. Using these similarity measures for the registration of dissimilar images leads to registration algorithms which are demonstrated to be far more robust than the methods currently in use (maximization of the correlation coefficient or correlation function, minimization of the sum of the absolute values of the differences). Two medical applications of these image processing methods are presented: the registration of gamma-ray images for change detection purposes, and the alignment of digitized X-ray images (without and with iodine contrast) for improving the quality of the subtraction angiographic images.
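The similarity measure itself reduces to a few lines. A sketch based only on the abstract's description:

```python
import numpy as np

def sign_change_count(a, b):
    """Similarity measure: subtract the two images and count sign changes
    along a raster scan of the difference. For well-registered images of the
    same scene the difference is mostly zero-mean noise, so the number of
    sign changes is maximal at registration."""
    diff = (a.astype(float) - b.astype(float)).ravel()
    diff = diff[diff != 0]               # skip exact zeros in the scan
    return int(np.sum(np.signbit(diff[1:]) != np.signbit(diff[:-1])))

# Registration then maximizes this count over the transform parameters, e.g.:
# best = max(candidate_shifts, key=lambda s: sign_change_count(a, shift_image(b, s)))
# (shift_image is a hypothetical helper applying the candidate transform.)
```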

01 May 1984
TL;DR: In this article, a binocular-stereo-matching algorithm for making rapid visual range measurements in noisy images is described, which is developed for application to problems in robotics where noise tolerance, reliability and speed are predominant issues.
Abstract: A binocular-stereo-matching algorithm for making rapid visual range measurements in noisy images is described. This technique is developed for application to problems in robotics where noise tolerance, reliability and speed are predominant issues. A high speed pipelined convolver for preprocessing images and an 'unstructured light' technique for improving signal quality are introduced to help enhance performance to meet the demands of this task domain. These optimizations, however, are not sufficient. A closer examination of the problems encountered suggests that broader interpretations of both the objective of binocular stereo and of the zero-crossing theory of Marr and Poggio are required. In this paper, we restrict ourselves to the problem of making a single primitive surface measurement. For example, to determine whether or not a specified volume of space is occupied, to measure the range to a surface at an indicated image location, or to determine the elevation gradient at that position. In this framework, we make a subtle but important shift from the explicit use of zero-crossing contours (in band-pass filtered images) as the elements matched between left and right images, to the use of the signs between zero-crossings. With this change, we obtain a simpler algorithm with a reduced sensitivity to noise and a more predictable behavior. The PRISM system incorporates this algorithm with the unstructured light technique and a high speed digital convolver. It has been used successfully by others as a sensor in a path planning system and a bin picking system.

Journal ArticleDOI
TL;DR: The present image processing system is analyzed for linearity of detection, light scattering artifacts, signal to noise ratio, standard curves, and spatial resolution, and the results obtained are shown to be comparable to the results from standard microspectrofluorometry.
Abstract: An interface of our microspectrofluorometer with an image processing system performs microspectrofluorometric measurements in living cells by digital image processing. Fluorescence spectroscopic parameters can be measured by digital image processing directly from microscopic images of cells, and are automatically normalized for pathlength and accessible volume. Thus, an accurate cytoplasmic "map" of various spectroscopic parameters can be produced. The resting cytoplasmic pH of fibroblasts (3T3 cells) has been determined by measuring the ratio of fluorescein fluorescence excited by two successive wavelengths (489 and 452 nm). Fluorescein-labeled dextran microinjected into the cells is used as a pH indicator, since it is trapped in the cytoplasm but is excluded from the nucleus and other organelles. The average cytoplasmic pH is 6.83 (+/- 0.38). However, cytoplasmic pH exhibits a nonunimodal distribution, the lower mean pH being 6.74 (+/- 0.23). When 3T3 cells pinocytose medium containing fluorescein dextran, pinosomes peripheral to the nucleus exhibit a lower pH than those closer to the ruffling edge of the cell. The present image processing system is analyzed for linearity of detection, light scattering artifacts, signal to noise ratio, standard curves, and spatial resolution. The results obtained from digital image analysis are shown to be comparable to the results from standard microspectrofluorometry. We also discuss several other applications of this ratio imaging technique in cell biology.

Journal ArticleDOI
TL;DR: Application of image processing for the visually impaired is discussed and the effectiveness of adaptive image enhancement for printed pictures is demonstrated using an optically simulated cataractous lens.
Abstract: Application of image processing for the visually impaired is discussed. Image degradation in the low vision patient's visual system can be specified as a transfer function obtained by measurements of contrast sensitivity. The effectiveness of adaptive image enhancement for printed pictures is demonstrated using an optically simulated cataractous lens.