
Showing papers on "Image processing published in 1986"


Book
01 Mar 1986
TL;DR: Robot Vision, as discussed by the authors, is a broad overview of the field of computer vision, using a consistent notation based on a detailed understanding of the image formation process; it provides a useful and current reference for professionals working in machine vision, image processing, and pattern recognition.
Abstract: From the Publisher: This book presents a coherent approach to the fast-moving field of computer vision, using a consistent notation based on a detailed understanding of the image formation process. It covers even the most recent research and will provide a useful and current reference for professionals working in the fields of machine vision, image processing, and pattern recognition. An outgrowth of the author's course at MIT, Robot Vision presents a solid framework for understanding existing work and planning future research. Its coverage includes a great deal of material that is important to engineers applying machine vision methods in the real world. The chapters on binary image processing, for example, help explain and suggest how to improve the many commercial devices now available. And the material on photometric stereo and the extended Gaussian image points the way to what may be the next thrust in commercialization of the results in this area. Chapters in the first part of the book emphasize the development of simple symbolic descriptions from images, while the remaining chapters deal with methods that exploit these descriptions. The final chapter offers a detailed description of how to integrate a vision system into an overall robotics system, in this case one designed to pick parts out of a bin. The many exercises complement and extend the material in the text, and an extensive bibliography will serve as a useful guide to current research.

3,783 citations


Book
01 Jan 1986

3,039 citations


Journal ArticleDOI
TL;DR: In this article, an optimal spectrum extraction procedure is described, and examples of its performance with CCD data are presented, which delivers the maximum possible signal-to-noise ratio while preserving spectrophotometric accuracy.
Abstract: An optimal spectrum extraction procedure is described, and examples of its performance with CCD data are presented. The algorithm delivers the maximum possible signal-to-noise ratio while preserving spectrophotometric accuracy. The effects of moderate geometric distortion and of cosmic-ray hits on the spectrum are automatically accounted for. In tests with background-noise limited CCD spectra, optimal extraction offers a 70-percent gain in effective exposure time in comparison with conventional extraction procedures.
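
The weighting step that makes the extraction "optimal" can be sketched directly: each spatial pixel of a wavelength column is weighted by its profile value over its variance. This is a minimal sketch of that idea, not the paper's full procedure (no distortion tracing or cosmic-ray rejection), and all variable names and numbers are ours.

```python
import numpy as np

def optimal_extract(data, profile, variance):
    """Optimally weighted extraction of one wavelength column.

    data     : sky-subtracted counts across the spatial direction
    profile  : normalized spatial profile (sums to 1)
    variance : per-pixel variance estimates

    Weighting each pixel by profile/variance maximizes S/N for a
    known profile, while remaining an unbiased flux estimate.
    """
    w = profile / variance
    return np.sum(w * data) / np.sum(w * profile)

# Toy column: Gaussian spatial profile plus noise (illustrative numbers)
x = np.arange(15)
profile = np.exp(-0.5 * ((x - 7) / 2.0) ** 2)
profile /= profile.sum()
true_flux = 1000.0
rng = np.random.default_rng(0)
data = true_flux * profile + rng.normal(0.0, 5.0, size=x.size)
variance = true_flux * profile + 25.0   # photon noise + read noise
print(optimal_extract(data, profile, variance))
```

In the noise-free limit the estimator returns the input flux exactly, which is the spectrophotometric-accuracy property the abstract emphasizes.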

1,779 citations


Book
01 Jan 1986

1,745 citations


Proceedings ArticleDOI
13 Oct 1986
TL;DR: The IRAF system provides a good selection of programs for general image processing and graphics applications, plus a large selection ofprograms for the reduction and analysis of optical astronomy data.
Abstract: The Image Reduction and Analysis Facility (IRAF) is a general purpose software system for the reduction and analysis of scientific data. The IRAF system provides a good selection of programs for general image processing and graphics applications, plus a large selection of programs for the reduction and analysis of optical astronomy data. The system also provides a complete modern scientific programming environment, making it straightforward for institutions using IRAF to add their own software to the system. Every effort has been made to make the system as portable and device independent as possible, so that the system may be used on a wide variety of host computers and operating systems with a wide variety of graphics and image display devices.

1,560 citations


Journal ArticleDOI
TL;DR: An approach is presented for the estimation of object motion parameters based on a sequence of noisy images that may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images are available.
Abstract: An approach is presented for the estimation of object motion parameters based on a sequence of noisy images. The problem considered is that of a rigid body undergoing unknown rotational and translational motion. The measurement data consists of a sequence of noisy image coordinates of two or more object correspondence points. By modeling the object dynamics as a function of time, estimates of the model parameters (including motion parameters) can be extracted from the data using recursive and/or batch techniques. This permits a desired degree of smoothing to be achieved through the use of an arbitrarily large number of images. Some assumptions regarding object structure are presently made. Results are presented for a recursive estimation procedure: the case considered here is that of a sequence of one dimensional images of a two dimensional object. Thus, the object moves in one transverse dimension, and in depth, preserving the fundamental ambiguity of the central projection image model (loss of depth information). An iterated extended Kalman filter is used for the recursive solution. Noise levels of 5-10 percent of the object image size are used. Approximate Cramer-Rao lower bounds are derived for the model parameter estimates as a function of object trajectory and noise level. This approach may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images (10 to 20 or more) are available.

515 citations


Journal ArticleDOI
TL;DR: A system is presented that takes a gray-level image as input, locates edges with subpixel accuracy, and links them into lines; the zero-crossings obtained from the full-resolution image using a space constant σ for the Gaussian closely match those obtained at reduced resolution with a proportionally reduced space constant, but the processing times are very different.
Abstract: We present a system that takes a gray level image as input, locates edges with subpixel accuracy, and links them into lines. Edges are detected by finding zero-crossings in the convolution of the image with Laplacian-of-Gaussian (LoG) masks. The implementation differs markedly from M.I.T.'s as we decompose our masks exactly into a sum of two separable filters instead of the usual approximation by a difference of two Gaussians (DOG). Subpixel accuracy is obtained through the use of the facet model [1]. We also note that the zero-crossings obtained from the full resolution image using a space constant σ for the Gaussian, and those obtained from the 1/n resolution image with 1/n pixel accuracy and a space constant of σ/n for the Gaussian, are very similar, but the processing times are very different. Finally, these edges are grouped into lines using the technique described in [2].
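
The exact separable decomposition the abstract mentions follows from ∇²G = G''(x)G(y) + G(x)G''(y): two separable passes summed, with no DoG approximation. A minimal sketch (σ and the kernel radius below are illustrative choices, not the paper's):

```python
import numpy as np

def log_filter(img, sigma=1.5, radius=6):
    """Laplacian-of-Gaussian filtering via the exact separable
    decomposition LoG = G''(x)G(y) + G(x)G''(y)."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    g2 = g * (x**2 - sigma**2) / sigma**4      # second derivative of g

    def separable(im, kr, kc):
        # convolve rows with kr, then columns with kc
        t = np.apply_along_axis(lambda r: np.convolve(r, kr, 'same'), 1, im)
        return np.apply_along_axis(lambda c: np.convolve(c, kc, 'same'), 0, t)

    return separable(img, g2, g) + separable(img, g, g2)

# Zero-crossings of the response mark edges: the response changes sign
# across a step edge
step = np.zeros((32, 32))
step[:, 16:] = 1.0
resp = log_filter(step)
```

Two 1-D passes per term cost O(k) per pixel instead of O(k²) for the full 2-D mask, which is the practical payoff of the decomposition.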

502 citations


Journal ArticleDOI
TL;DR: In this paper, a series of one-dimensional surfaces is fit to each window, and the surface description that is adequate in the least-squares sense and has the fewest parameters is accepted.
Abstract: An edge in an image corresponds to a discontinuity in the intensity surface of the underlying scene. It can be approximated by a piecewise straight curve composed of edgels, i.e., short, linear edge-elements, each characterized by a direction and a position. The approach to edgel detection here is to fit a series of one-dimensional surfaces to each window (kernel of the operator) and accept the surface description which is adequate in the least squares sense and has the fewest parameters. (A one-dimensional surface is one which is constant along some direction.) The tanh is an adequate basis for the step-edge, and its combinations are adequate for the roof-edge and the line-edge. The proposed method of step-edgel detection is robust with respect to noise; for (step-size/σ_noise) ≥ 2.5, it has subpixel position localization (σ_position < 1 pixel) and an angular localization better than 10°; further, it is designed to be insensitive to smooth shading. These results are demonstrated by some simple analysis, statistical data, and edgel images. Also included is a comparison of performance on a real image, with a typical operator (Difference-of-Gaussians). The results indicate that the proposed operator is superior with respect to detection, localization, and resolution.
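
The tanh step-edge basis is concrete enough to sketch in one dimension: least-squares fitting a scaled, shifted tanh to an intensity profile recovers the edge position with subpixel precision. This is a minimal 1-D sketch, not the paper's 2-D windowed operator, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def step_model(x, a, b, x0, w):
    # 1-D step-edge basis named in the abstract: a scaled, shifted tanh
    return a * np.tanh((x - x0) / w) + b

# Synthetic noisy profile across an edge at x0 = 13.4
x = np.arange(32, dtype=float)
rng = np.random.default_rng(1)
y = step_model(x, 5.0, 10.0, 13.4, 1.5) + rng.normal(0.0, 0.3, x.size)

# Initialize the edge position at the largest intensity jump, then
# least-squares fit all four parameters
x0_guess = x[np.argmax(np.abs(np.diff(y)))]
popt, _ = curve_fit(step_model, x, y, p0=[1.0, y.mean(), x0_guess, 2.0])
print(popt[2])   # subpixel estimate of the edge position
```

The fitted x0 is not tied to the pixel grid, which is where the subpixel localization claim comes from.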

495 citations


Book
17 Apr 1986
TL;DR: Results of experimental and theoretical investigations into the effectiveness of fuzzy algorithms as classification tools are presented for problems in pattern recognition and image processing.
Abstract: This book aims to present results of investigations, both experimental and theoretical, into the effectiveness of fuzzy algorithms as classification tools in some problems concerned with the field of pattern recognition and image processing. Compares results to those obtained with statistical classification techniques.

472 citations


Journal ArticleDOI
TL;DR: The evaluation of a prototype dual-energy implementation using rapid kVp switching on a clinical computed tomographic scanner is reported, which employs prereconstruction basis material decomposition of the dual- energy projection data.
Abstract: We report the evaluation of a prototype dual-energy implementation using rapid kVp switching on a clinical computed tomographic scanner. The method employs prereconstruction basis material decomposition of the dual-energy projection data. Each dual-energy scan can be processed into conventional single-kVp images, basis material density images, and monoenergetic images. Phantom studies were carried out to qualitatively and quantitatively evaluate and validate the approach.
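
Per ray, the basis material decomposition reduces to a small linear solve: two log-attenuation measurements (low and high kVp) determine two basis-material line integrals, from which a monoenergetic value can be synthesized. A sketch under stated assumptions; the coefficient matrix below is illustrative, not calibration data from the paper.

```python
import numpy as np

# Hypothetical effective attenuation coefficients of two basis materials
# at the low- and high-kVp settings (illustrative numbers)
M = np.array([[0.38, 0.26],    # low kVp:  [basis 1, basis 2]
              [0.25, 0.21]])   # high kVp: [basis 1, basis 2]

def basis_decompose(att_low, att_high):
    """Prereconstruction decomposition of one ray: solve the 2x2 linear
    system relating the two log-attenuation measurements to the two
    basis-material line integrals A = (A1, A2)."""
    return np.linalg.solve(M, np.array([att_low, att_high]))

def monoenergetic(A, mu1_E, mu2_E):
    # Synthesize the attenuation this ray would show at a single energy E
    return A[0] * mu1_E + A[1] * mu2_E
```

Decomposing the projection data before reconstruction (rather than subtracting reconstructed images) is what makes the scheme "prereconstruction" in the abstract's sense.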

471 citations


Journal ArticleDOI
TL;DR: This taxonomy adds insight into the development of remote sensing theory and points the way to new, productive areas of research.

Book
01 Jan 1986
TL;DR: This chapter discusses three-Dimensional Shape Representation, Relational Matching, and Machine Learning of Computer Vision Algorithms for 3D Perception of Dynamic Scenes.
Abstract: Principles of Computer Vision. Three-Dimensional Shape Representation. Three-Dimensional Shape Recovery from Line Drawings. Recovery of 3D Shape of Curved Objects. Surface Reflection Mechanism. Extracting Shape from Shading. Range Image Analysis. Stereo Vision. Machine Learning of Computer Vision Algorithms. Image Sequence Analysis for 3D Perception of Dynamic Scenes. Nonrigid Motion Analysis. Analysis and Synthesis of Human Movement. Relational Matching. Three-Dimensional Object Recognition. Fundamental Principles of Robot Vision. Chapter References.

Journal ArticleDOI
TL;DR: In this paper, the authors focus on the amount and nature of information about the image present in the measurements, which has become increasingly relevant with the growth of interferometry, where the data correspond to the Fourier transform of the image.
Abstract: Imaging the two-dimensional intensity distribution of the sky has always been an important part of astronomy. This is particularly true at present, a time when aperture synthesis mapping is firmly established in radio astronomy, charge-coupled devices are revolutionizing optical imaging, and X-ray-imaging cameras are being flown in space. Atmospheric irregularities, instrument aberrations, detector noise, and the diffraction limit all cause the observed image to deviate from the ideal one. Image restoration techniques have therefore had a long history. The field owes much to the classic papers of Bracewell & Roberts (12) and Fellgett & Linfoot (35), which focused attention on the amount and nature of information about the image present in the measurements. These ideas have become increasingly relevant with the growth of interferometry, where the data correspond to the Fourier transform of the image. The Michelson stellar interferometer (82) was an early application in optical astronomy. However, it is at radio frequencies where interferometry has proved most fruitful and where astronomers have had to face the problem

Journal ArticleDOI
TL;DR: A new nonlinear, space-variant filtering algorithm is proposed which smooths jagged edges without blurring them, and smooths out abrupt intensity changes in monotone areas.
Abstract: An important application of spatial filtering techniques is in the postprocessing of images degraded by coding. Linear, space-invariant filters are inadequate to reduce the noise produced by block coders. The noise in block coded images is correlated with the local characteristics of the signal, and such filters are unable to exploit this correlation to reduce the noise. We propose a new nonlinear, space-variant filtering algorithm which smooths jagged edges without blurring them, and smooths out abrupt intensity changes in monotone areas. Edge sharpness is preserved because near edges the filtering of the signal is negligible. Consequently, in-band noise is not reduced, but the well-known masking effect reduces the visibility of this in-band noise. The algorithm is only slightly more complex to implement than simple linear filtering. We present examples of processed images and SNR figures to demonstrate that a significant improvement in subjective and objective quality is achieved.
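
The space-variant idea can be sketched crudely: smooth only where the local neighborhood is nearly flat (monotone areas with blocking noise), and leave high-contrast pixels (edges) untouched. This toy filter is in the spirit of the abstract, not the published algorithm; the window size and threshold are illustrative assumptions.

```python
import numpy as np

def space_variant_smooth(img, thresh=10.0):
    """Average 3x3 neighborhoods only where the local intensity range is
    below thresh; pixels near strong edges pass through unchanged."""
    H, W = img.shape
    out = img.astype(float).copy()
    pad = np.pad(img.astype(float), 1, mode='edge')
    # Stack the nine shifted copies of the image covering each 3x3 window
    nbrs = np.stack([pad[dy:dy + H, dx:dx + W]
                     for dy in range(3) for dx in range(3)])
    local_range = nbrs.max(axis=0) - nbrs.min(axis=0)
    flat = local_range < thresh          # monotone areas only
    out[flat] = nbrs.mean(axis=0)[flat]
    return out
```

Because the filter is disabled near edges, in-band noise there is untouched, matching the abstract's point that the masking effect, not the filter, hides it.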

Journal ArticleDOI
TL;DR: A new approach to determination of mapping functions for registration of digital images is presented, where the images are divided into triangular regions by triangulating the control points and the overall mapping function is obtained by piecing together the linear mapping functions.
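
The scheme described in the TL;DR can be sketched with a Delaunay triangulation of the control points: inside each triangle, the unique affine map sending its vertices to the corresponding target control points is applied, and the pieces agree along shared edges. Function and variable names below are ours, not the paper's.

```python
import numpy as np
from scipy.spatial import Delaunay

def piecewise_linear_map(src_pts, dst_pts, query):
    """Map query points from source to target via per-triangle affine maps
    defined by corresponding control points."""
    tri = Delaunay(src_pts)
    simplex = tri.find_simplex(query)          # triangle containing each query
    T = tri.transform[simplex]                 # barycentric transform data
    b = np.einsum('nij,nj->ni', T[:, :2], query - T[:, 2])
    bary = np.hstack([b, 1.0 - b.sum(axis=1, keepdims=True)])
    verts = tri.simplices[simplex]             # corner indices per query point
    # Barycentric interpolation of the target control points = affine map
    return np.einsum('ni,nij->nj', bary, dst_pts[verts])

src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], dtype=float)
dst = src + np.array([1.0, 2.0])               # control points moved by a shift
q = np.array([[0.25, 0.25], [0.7, 0.3]])
print(piecewise_linear_map(src, dst, q))
```

When the control-point motion is itself affine (as in the pure shift above), every piece reproduces it exactly; in general each triangle gets its own local map.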

Journal ArticleDOI
TL;DR: Image descriptors based on the circular-Fourier-radial-Mellin transform are used for position-, rotation-, scale-, and intensity-invariant multiclass pattern recognition, and the influence of additive noise is investigated.
Abstract: Image descriptors based on the circular-Fourier-radial-Mellin transform are used for position-, rotation-, scale-, and intensity-invariant multiclass pattern recognition. The orders of the radial moments and of the circular harmonics are chosen to obtain an efficient image description. The first-order radial moments of three circular harmonics are sufficient to obtain a satisfactory recognition performance. The influence of additive noise is investigated. Experimental results are shown.
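
A simplified digital reading of the descriptor construction: first-order radial moments of a few circular harmonics about the image centroid, with magnitudes taken for rotation invariance and a normalization for intensity invariance. The paper works with optical transforms; the sampling and normalization choices below are our assumptions.

```python
import numpy as np

def harmonic_descriptors(img, orders=(0, 1, 2), radial_power=1):
    """D_m = sum over pixels of r^s * exp(-i*m*theta) * I(x, y), taken
    about the centroid. |D_m| is invariant to rotation about the
    centroid; dividing by |D_0| removes overall intensity."""
    H, W = img.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    m00 = img.sum()
    cx, cy = (img * xx).sum() / m00, (img * yy).sum() / m00
    dx, dy = xx - cx, yy - cy
    r = np.hypot(dx, dy)
    th = np.arctan2(dy, dx)
    D = np.abs(np.array([np.sum(img * r**radial_power * np.exp(-1j * m * th))
                         for m in orders]))
    return D / D[0]
```

A rotation shifts every pixel's angle by a constant, multiplying D_m by a unit-modulus phase, so the magnitudes survive; scaling the intensity scales every D_m equally, so the ratio survives.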

Journal ArticleDOI
TL;DR: An adaptive two-dimensional filter has been developed which uses local features of image texture to recognize and maximally low-pass filter those parts of the image which correspond to fully developed speckle, while substantially preserving information associated with resolved-object structure.

Book
01 Jan 1986
TL;DR: PASM, a large-scale multimicroprocessor system being designed at Purdue University for image processing and pattern recognition, is described and examples of how PASM can be used to perform image processing tasks are given.
Abstract: PASM, a large-scale multimicroprocessor system being designed at Purdue University for image processing and pattern recognition, is described. This system can be dynamically reconfigured to operate as one or more independent SIMD and/or MIMD machines. PASM consists of a parallel computation unit, which contains N processors, N memories, and an interconnection network; Q microcontrollers, each of which controls N/Q processors; N/Q parallel secondary storage devices; a distributed memory management system; and a system control unit, to coordinate the other system components. Possible values for N and Q are 1024 and 16, respectively. The control schemes and memory management in PASM are explored. Examples of how PASM can be used to perform image processing tasks are given.

Journal ArticleDOI
TL;DR: Image shifting provides a method of determining the direction of displacement, and hence the velocity, for all types of pulsed laser velocimeter, and it is capable of high performance.
Abstract: Image shifting provides a method of determining the direction of displacement, and hence the velocity, for all types of pulsed laser velocimeter. It is independent of the scattering properties of the particles and/or the intensity of the illumination of the first image with respect to the second image, and it is capable of high performance. With rotating mirror systems, image shifting can be used to offset negative velocities up to 10 m/s. With electrooptic systems, it is estimated that image shifting can be used at velocities up to 500 m/s.

Book ChapterDOI
01 Jan 1986
TL;DR: As this book demonstrates, digital image-processing techniques can be used to produce information about the optical image that cannot be obtained in any other way.
Abstract: New discoveries in the life sciences are often linked to the development of unique optical tools that allow experimental material to be examined in new ways. We as microscopists are constantly searching for new techniques for extracting even more optical information from the material we work with, as the subject of this book aptly demonstrates. It is not surprising then that microscopists have begun to turn to computer technology in order to squeeze more information from their experimental images. Computer processing can be used to obtain numerical information from the microscope image that is more accurate, less time-consuming, and more reproducible than the same operations performed by other methods. Computer processing can be used to enhance the appearance of the microscope image, for example to increase contrast or to reduce noise, in ways that are difficult to duplicate using photographic or video techniques alone. When used to their fullest power, digital image-processing techniques can be used to produce information about the optical image that cannot be obtained in any other way.

Journal ArticleDOI
TL;DR: In this paper, the concept of invariance of information capacity is applied to the resolution of an optical system; methods of obtaining superresolution in microscopy are discussed, and scanning microscopy is shown to have many distinct advantages for such applications.
Abstract: The concept of invariance of information capacity is discussed and applied to the resolution of an optical system. Methods of obtaining superresolution in microscopy are discussed; scanning microscopy has many distinct advantages for such applications.

Journal ArticleDOI
TL;DR: Examples demonstrate how symbolic substitution logic can be used to implement Boolean logic, binary arithmetic, cellular logic, and Turing machines.
Abstract: Symbolic substitution logic is based on optical pattern transformations. This space-invariant mechanism is shown to be capable of supporting space-variant operations. An optical implementation is proposed. It is based on splitting an image, shifting the split images, superimposing the results, regenerating the superimposed image with an optical logic array, splitting the regenerated image, shifting the resulting images, and superimposing the shifted images. Experimental results are presented. Examples demonstrate how symbolic substitution logic can be used to implement Boolean logic, binary arithmetic, cellular logic, and Turing machines.
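
The split/shift/superimpose cycle can be simulated on a binary array: recognition is an AND of shifted image copies (one per "on" pixel of the search pattern), and substitution is an OR of shifted copies of the recognition result. A toy sketch (the rule below is our example; np.roll wraps at the borders, so keep patterns away from the edges):

```python
import numpy as np

def recognize(img, on_offsets):
    # AND together copies of the image shifted so every 'on' pixel of the
    # search pattern lands on the reference position
    hits = np.ones_like(img)
    for dy, dx in on_offsets:
        hits &= np.roll(np.roll(img, -dy, 0), -dx, 1)
    return hits

def substitute(hits, out_offsets):
    # OR shifted copies of the recognition result to scribe the new pattern
    out = np.zeros_like(hits)
    for dy, dx in out_offsets:
        out |= np.roll(np.roll(hits, dy, 0), dx, 1)
    return out

# Example rule: wherever a horizontal pair of 1s occurs, scribe a
# vertical pair below the reference pixel
img = np.zeros((8, 8), dtype=np.int64)
img[2, 3] = img[2, 4] = 1
hits = recognize(img, [(0, 0), (0, 1)])
new = substitute(hits, [(1, 0), (2, 0)])
```

Every operation is a global shift or a pointwise logic step, which is why the mechanism maps naturally onto beam splitters and optical logic arrays.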

Journal ArticleDOI
TL;DR: This procedure brings out the features in the image with little or no enhancement of the noise; adaptive neighborhoods with surrounds whose width is a constant difference from the center width yield improved enhancement over those with a constant ratio of surround to center width.
Abstract: X-ray mammography is the only breast cancer detection technique presently available with proven efficacy. Mammographic detection of early breast cancer requires optimal radiological or image processing techniques. We present an image processing approach based on adaptive neighborhood processing with a new set of contrast enhancement functions to enhance mammographic features. This procedure brings out the features in the image with little or no enhancement of the noise. We also find that adaptive neighborhoods with surrounds whose width is a constant difference from the center yield improved enhancement over adaptive neighborhoods with a constant ratio of surround to center neighborhood widths.
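
The center/surround contrast idea can be sketched with fixed neighborhoods. The paper grows each neighborhood adaptively around the pixel; this sketch fixes a 3x3 center and a surround ring 2 pixels wide (a "constant difference" of widths) and uses a simple linear enhancement function with an assumed gain.

```python
import numpy as np

def _box_mean(img, r):
    # mean over a (2r+1)x(2r+1) window, edge-padded
    H, W = img.shape
    pad = np.pad(img, r, mode='edge')
    k = 2 * r + 1
    return np.stack([pad[dy:dy + H, dx:dx + W]
                     for dy in range(k) for dx in range(k)]).mean(axis=0)

def contrast_enhance(img, gain=2.0, eps=1e-6):
    """Amplify each pixel's center/surround contrast by a fixed gain
    (illustrative enhancement function, not the paper's)."""
    img = img.astype(float)
    center = _box_mean(img, 1)                                   # 3x3 center
    surround = (49.0 * _box_mean(img, 3) - 9.0 * center) / 40.0  # ring mean
    c = (center - surround) / (np.abs(surround) + eps)           # contrast
    return surround + gain * c * (np.abs(surround) + eps)
```

Flat regions are left exactly unchanged (contrast zero), which is the "little or no enhancement of the noise" property in uniform background.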

Journal ArticleDOI
TL;DR: An optimal statistical parameter estimation technique is presented for the identification of unknown image and blur model parameters, and the proposed algorithms constitute a generalization of previous work on blur identification in that they are able to locate the zero loci of the blurred image spectrum on the entire z1-z2 plane.
Abstract: An optimal statistical parameter estimation technique is presented for the identification of unknown image and blur model parameters. The development leads to an autoregressive moving average (ARMA) model identification problem, where the image model coefficients define the AR part, and the blur parameters define the MA part. Conditional maximum-likelihood estimates of the unknown parameters are derived both in the absence and in the presence of observation noise. The proposed algorithms constitute a generalization of previous work on blur identification in that they are able to locate the zero loci of the blurred image spectrum on the entire z1-z2 plane. Simulation results, as well as photographically blurred images processed with the proposed algorithms, are shown as examples.

Journal Article
TL;DR: This book covers Fourier theory, deconvolution, phase recovery, reconstruction from projections, speckle imaging and interferometry, image processing system design, program categories, and technical practicalities.
Abstract: Setting the scene. Fourier theory. Deconvolution. Phase recovery. Reconstruction from projections. Speckle imaging and interferometry. Image processing system design. Program categories. Technical practicalities.

Journal ArticleDOI
TL;DR: This paper details the design and implementation of ANGY, a rule-based expert system in the domain of medical image processing that identifies and isolates the coronary vessels while ignoring any nonvessel structures which may have arisen from noise, variations in background contrast, imperfect subtraction, and irrelevant anatomical detail.
Abstract: This paper details the design and implementation of ANGY, a rule-based expert system in the domain of medical image processing. Given a subtracted digital angiogram of the chest, ANGY identifies and isolates the coronary vessels, while ignoring any nonvessel structures which may have arisen from noise, variations in background contrast, imperfect subtraction, and irrelevant anatomical detail. The overall system is modularized into three stages: the preprocessing stage and the two stages embodied in the expert itself. In the preprocessing stage, low-level image processing routines written in C are used to create a segmented representation of the input image. These routines are applied sequentially. The expert system is rule-based and is written in OPS5 and LISP. It is separated into two stages: The low-level image processing stage embodies a domain-independent knowledge of segmentation, grouping, and shape analysis. Working with both edges and regions, it determines such relations as parallel and adjacent and attempts to refine the segmentation begun by the preprocessing. The high-level medical stage embodies a domain-dependent knowledge of cardiac anatomy and physiology. Applying this knowledge to the objects and relations determined in the preceding two stages, it identifies those objects which are vessels and eliminates all others.

Journal ArticleDOI
TL;DR: In this article, coherent image amplification by two-wave coupling in a crystal of photorefractive BaTiO3 is analyzed, and the amplifier is optimized with respect to gain, amplified image quality, and space-bandwidth product.
Abstract: Coherent image amplification by two-wave coupling in a crystal of photorefractive BaTiO3 is analyzed. This amplifier is optimized with respect to such operational characteristics as gain versus amplified image quality and space-bandwidth product. Experimental results that demonstrate coherent image amplification of 4000, space-bandwidth product of 1,000,000, and gray-level dynamic range of greater than 100 are presented.

Journal ArticleDOI
TL;DR: It is suggested that knowledge of the associative structure of sensory messages, in the form of the unexpected coincidences that occur, may represent the beginning of the formation of a working model, or cognitive map, of the environment.

Journal ArticleDOI
Mehdi Hatamian
TL;DR: A fast algorithm and its single chip VLSI implementation for generating moments of two-dimensional (2-D) digital images for real-time image processing applications is presented and the number of multiplications is reduced by more than 5 orders of magnitude.
Abstract: We present a fast algorithm and its single chip VLSI implementation for generating moments of two-dimensional (2-D) digital images for real-time image processing applications. Using this algorithm, the number of multiplications for computing 16 moments of a 512 × 512 image is reduced by more than 5 orders of magnitude compared to the direct implementation; the number of additions is reduced by a factor of 4. This also makes the software implementation extremely fast. Using the chip, 16 moments μp,q (p, q = 0, 1, 2, 3) of a 512 × 512 8 bits/pixel image can be calculated in real time (i.e., 30 frames per second). Each moment value is computed as a 64-bit integer. The basic building block of the algorithm is a single-pole digital filter implemented with a simple accumulator. These filters are cascaded together in both horizontal and vertical directions in a highly regular structure which makes it very suitable for VLSI implementation. The chip has been implemented in 2.5-μm CMOS technology, and it occupies 6100 μm × 6100 μm of silicon area. The chip can also be used as a general cell in a systolic architecture for implementing 2-D transforms having polynomial basis functions.
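
The structural trick the chip exploits is that 2-D geometric moments factor separably: reduce each row to its x-moments, then reduce those over y. This sketch shows only that separable factorization, not the accumulator cascade itself (which further replaces the multiplications with running sums); names are ours.

```python
import numpy as np

def moments_2d(img, pmax=3, qmax=3):
    """Geometric moments mu_pq = sum over x, y of x^p * y^q * I[y, x],
    computed as two one-dimensional reductions."""
    H, W = img.shape
    xs = np.arange(W, dtype=float)
    ys = np.arange(H, dtype=float)
    Xp = np.stack([xs**p for p in range(pmax + 1)])   # (pmax+1, W)
    Yq = np.stack([ys**q for q in range(qmax + 1)])   # (qmax+1, H)
    row_moms = img.astype(float) @ Xp.T               # per-row x-moments
    return Yq @ row_moms                              # result[q, p] = mu_pq
```

The separable form costs O(HW·pmax) instead of O(HW·pmax·qmax) multiplications, and the same horizontal-then-vertical cascade is what the chip lays out in silicon.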