
Showing papers on "Image processing published in 1978"


Journal ArticleDOI
Hsieh Hou, H. Andrews
TL;DR: Applications to image and signal processing include interpolation, smoothing, filtering, enlargement, and reduction, and experimental results are presented for illustrative purposes in two-dimensional image format.
Abstract: This paper presents the use of B-splines as a tool in various digital signal processing applications. The theory of B-splines is briefly reviewed, followed by discussions on B-spline interpolation and B-spline filtering. Computer implementation using both an efficient software viewpoint and a hardware method are discussed. Finally, experimental results are presented for illustrative purposes in two-dimensional image format. Applications to image and signal processing include interpolation, smoothing, filtering, enlargement, and reduction.
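A minimal sketch of the kind of operation the paper covers, using SciPy's B-spline-based resampler for image enlargement and reduction (an illustration of the technique, not the authors' implementation; array sizes and zoom factors are assumed):

```python
# Hedged sketch: cubic B-spline interpolation for image enlargement/reduction.
import numpy as np
from scipy import ndimage

img = np.random.rand(64, 64)                           # stand-in for a grayscale image

enlarged = ndimage.zoom(img, zoom=4, order=3)          # 4x enlargement, cubic B-spline
reduced = ndimage.zoom(enlarged, zoom=0.25, order=3)   # back to the original size

print(img.shape, enlarged.shape, reduced.shape)        # (64, 64) (256, 256) (64, 64)
```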

1,293 citations


01 Jan 1978
TL;DR: This entry is a catalogue reference record for a 1978 work on remote sensing and data processing; no abstract text is available.
Abstract: Keywords: Remote sensing; data processing (traitement de données). Reference record created on 2005-06-20, modified on 2016-08-08.

1,149 citations


Journal ArticleDOI
20 Apr 1978-Nature
TL;DR: In this article, a technique for image reconstruction by a maximum entropy method is presented, which is sufficiently fast to be useful for large and complicated images and is applicable in spectroscopy, electron microscopy, X-ray crystallography, geophysics and virtually any type of optical image processing.
Abstract: Results are presented of a powerful technique for image reconstruction by a maximum entropy method, which is sufficiently fast to be useful for large and complicated images. Although our examples are taken from the fields of radio and X-ray astronomy, the technique is immediately applicable in spectroscopy, electron microscopy, X-ray crystallography, geophysics and virtually any type of optical image processing. Applied to radioastronomical data, the algorithm reveals details not seen by conventional analysis, but which are known to exist.
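The paper's fast algorithm is not reproduced here, but the toy sketch below illustrates the underlying idea of maximum-entropy reconstruction: gradient ascent on an entropy term minus a data-misfit term for a positive image, assuming a known convolutional instrument response and synthetic data (the PSF, step size, and weighting are assumed values for illustration only):

```python
# Toy maximum-entropy reconstruction by gradient ascent on
#   Q(f) = S(f) - lam * chi2(f)/2,   S(f) = -sum f * ln(f / m).
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
true = np.zeros((32, 32))
true[10, 12], true[20, 22] = 5.0, 3.0                  # two point sources
w = np.hanning(7)
psf = np.outer(w, w); psf /= psf.sum()                 # toy beam / PSF
data = fftconvolve(true, psf, mode="same") + 0.01 * rng.standard_normal(true.shape)

def forward(f):                                        # instrument model
    return fftconvolve(f, psf, mode="same")

f = np.full_like(data, max(data.mean(), 1e-3))         # flat positive starting image
m = f.copy()                                           # default (prior) image
lam, step = 10.0, 0.01
for _ in range(300):
    resid = forward(f) - data
    grad_fit = fftconvolve(resid, psf[::-1, ::-1], mode="same")  # d(chi2/2)/df
    grad_ent = -np.log(f / m)                                    # dS/df (up to a constant)
    f = np.clip(f + step * (grad_ent - lam * grad_fit), 1e-6, None)  # keep f > 0
```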

969 citations


Journal ArticleDOI
K. Tomiyasu
01 May 1978
TL;DR: In this article, a synthetic aperture radar (SAR) is used to produce high-resolution two-dimensional images of mapped areas, where the amplitude and phase of received signals are collected for the duration of an integration time after which the signal is processed.
Abstract: A synthetic aperture radar (SAR) can produce high-resolution two-dimensional images of mapped areas. The SAR comprises a pulsed transmitter, an antenna, and a phase-coherent receiver. The SAR is borne by a constant velocity vehicle such as an aircraft or satellite, with the antenna beam axis oriented obliquely to the velocity vector. The image plane is defined by the velocity vector and antenna beam axis. The image orthogonal coordinates are range and cross range (azimuth). The amplitude and phase of the received signals are collected for the duration of an integration time after which the signal is processed. High range resolution is achieved by the use of wide bandwidth transmitted pulses. High azimuth resolution is achieved by focusing, with a signal processing technique, an extremely long antenna that is synthesized from the coherent phase history. The pulse repetition frequency of the SAR is constrained within bounds established by the geometry and signal ambiguity limits. SAR operation requires relative motion between radar and target. Nominal velocity values are assumed for signal processing and measurable deviations are used for error compensation. Residual uncertainties and high-order derivatives of the velocity which are difficult to compensate may cause image smearing, defocusing, and increased image sidelobes. The SAR transforms the ocean surface into numerous small cells, each with dimensions of range and azimuth resolution. An image of a cell can be produced provided the radar cross section of the cell is sufficiently large and the cell phase history is deterministic. Ocean waves evidently move sufficiently uniformly to produce SAR images which correlate well with optical photographs and visual observations. The relationship between SAR images and oceanic physical features is not completely understood, and more analyses and investigations are desired.
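Two standard textbook relations summarize the resolution arguments in this survey; the code below evaluates them under assumed parameter values (the numbers are illustrative, not taken from the paper):

```python
# Standard textbook SAR resolution relations (assumed, not quoted from the paper):
#   slant-range resolution ~ c / (2 * B),  B = transmitted pulse bandwidth
#   azimuth resolution     ~ D / 2,        D = physical antenna length
#                                           (fully focused strip-map case, range-independent)
C = 3.0e8  # speed of light, m/s

def slant_range_resolution(bandwidth_hz: float) -> float:
    return C / (2.0 * bandwidth_hz)

def azimuth_resolution(antenna_length_m: float) -> float:
    return antenna_length_m / 2.0

print(slant_range_resolution(20e6))   # 20 MHz pulse bandwidth -> 7.5 m
print(azimuth_resolution(10.0))       # 10 m antenna -> 5.0 m
```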

368 citations


Patent
06 Oct 1978
TL;DR: In this paper, a method and apparatus are disclosed for separating the information on documents involved in transactions, such as accounting or banking transactions, from the documents themselves, and for placing control of the processing of the transactions on this information instead of on the documents themselves.
Abstract: A method and apparatus are disclosed for separating the information on documents involved in transactions such as accounting or banking transactions, for example, from the documents themselves and also for placing control of the processing of the transactions on this information instead of on the documents themselves. An image lift unit at a point of acceptance in a banking system disclosed herein generates an electronic image of each of the documents presented thereat and also tags the documents and the associated images with identification indicia to provide entry records. These entry records are processed at a processing center in the system to develop accounting source data, to perform accounting transactions with that source data without using the documents themselves, and, in turn, to produce data records which are recorded on an archival record along with the images of the associated documents. A point of payment within the system has a display unit for displaying (via the archival record) the data records and images associated with the documents for making acceptance or rejection decisions with regard to the documents, and also has a printer for making copies of these documents.

336 citations


Journal ArticleDOI
01 Apr 1978
TL;DR: An overview of present computer techniques of partitioning continuous-tone images into meaningful segments and of characterizing these segments by sets of "features" is presented.
Abstract: An overview of present computer techniques of partitioning continuous-tone images into meaningful segments and of characterizing these segments by sets of "features" is presented. Segmentation often consists of two methods: boundary detection and texture analysis. Both of these are discussed. The design of the segmenter and feature extractor are intimately related to the design of the rest of the image analysis system, particularly the preprocessor and the classifier. Toward aiding this design, a few guidelines and illustrative examples are included.

265 citations


Journal ArticleDOI
TL;DR: The derivation of moment equations and a method of moment measurement are described and a threshold sequence and decision rules are developed and implemented in the matching of radar to optical images using a hierarchical search technique with the invariant moments as similarity measures.
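As an illustration of moment-based similarity measures of the kind the paper uses, the sketch below computes the first two of Hu's translation-, scale-, and rotation-invariant moments for an image patch; the paper's exact moment set, threshold sequence, and hierarchical search logic are not reproduced:

```python
# Hedged sketch: the first two of Hu's invariant moments as matching features.
import numpy as np

def hu_moments_2(patch: np.ndarray):
    patch = patch.astype(float)
    y, x = np.mgrid[: patch.shape[0], : patch.shape[1]]
    m00 = patch.sum()
    xc, yc = (x * patch).sum() / m00, (y * patch).sum() / m00

    def mu(p, q):                       # central moment
        return ((x - xc) ** p * (y - yc) ** q * patch).sum()

    def eta(p, q):                      # normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

# A similarity measure could then be the distance between the invariant-moment
# vectors of a radar patch and an optical patch, evaluated coarse-to-fine.
```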

246 citations


Book
01 Feb 1978
TL;DR: The purpose of this monograph is to introduce and discuss the mathematical characterization of continuous and discrete images, together with techniques for two-dimensional image processing, enhancement, restoration, and analysis.
Abstract: Preface. Acknowledgments.
PART 1 CONTINUOUS IMAGE CHARACTERIZATION: 1 Continuous Image Mathematical Characterization (1.1 Image Representation; 1.2 Two-Dimensional Systems; 1.3 Two-Dimensional Fourier Transform; 1.4 Image Stochastic Characterization); 2 Psychophysical Vision Properties (2.1 Light Perception; 2.2 Eye Physiology; 2.3 Visual Phenomena; 2.4 Monochrome Vision Model; 2.5 Color Vision Model); 3 Photometry and Colorimetry (3.1 Photometry; 3.2 Color Matching; 3.3 Colorimetry Concepts; 3.4 Tristimulus Value Transformation; 3.5 Color Spaces).
PART 2 DIGITAL IMAGE CHARACTERIZATION: 4 Image Sampling and Reconstruction (4.1 Image Sampling and Reconstruction Concepts; 4.2 Monochrome Image Sampling Systems; 4.3 Monochrome Image Reconstruction Systems; 4.4 Color Image Sampling Systems); 5 Image Quantization (5.1 Scalar Quantization; 5.2 Processing Quantized Variables; 5.3 Monochrome and Color Image Quantization).
PART 3 DISCRETE TWO-DIMENSIONAL PROCESSING: 6 Discrete Image Mathematical Characterization (6.1 Vector-Space Image Representation; 6.2 Generalized Two-Dimensional Linear Operator; 6.3 Image Statistical Characterization; 6.4 Image Probability Density Models; 6.5 Linear Operator Statistical Representation); 7 Superposition and Convolution (7.1 Finite-Area Superposition and Convolution; 7.2 Sampled Image Superposition and Convolution; 7.3 Circulant Superposition and Convolution; 7.4 Superposition and Convolution Operator Relationships); 8 Unitary Transforms (8.1 General Unitary Transforms; 8.2 Fourier Transform; 8.3 Cosine, Sine and Hartley Transforms; 8.4 Hadamard, Haar and Daubechies Transforms; 8.5 Karhunen-Loeve Transform); 9 Linear Processing Techniques (9.1 Transform Domain Processing; 9.2 Transform Domain Superposition; 9.3 Fast Fourier Transform Convolution; 9.4 Fourier Transform Filtering; 9.5 Small Generating Kernel Convolution).
PART 4 IMAGE IMPROVEMENT: 10 Image Enhancement (10.1 Contrast Manipulation; 10.2 Histogram Modification; 10.3 Noise Cleaning; 10.4 Edge Crispening; 10.5 Color Image Enhancement; 10.6 Multispectral Image Enhancement); 11 Image Restoration Models (11.1 General Image Restoration Models; 11.2 Optical Systems Models; 11.3 Photographic Process Models; 11.4 Discrete Image Restoration Models); 12 Image Restoration Techniques (12.1 Sensor and Display Point Nonlinearity Correction; 12.2 Continuous Image Spatial Filtering Restoration; 12.3 Pseudoinverse Spatial Image Restoration; 12.4 SVD Pseudoinverse Spatial Image Restoration; 12.5 Statistical Estimation Spatial Image Restoration; 12.6 Constrained Image Restoration; 12.7 Blind Image Restoration; 12.8 Multi-Plane Image Restoration); 13 Geometrical Image Modification (13.1 Basic Geometrical Methods; 13.2 Spatial Warping; 13.3 Perspective Transformation; 13.4 Camera Imaging Model; 13.5 Geometrical Image Resampling).
PART 5 IMAGE ANALYSIS: 14 Morphological Image Processing (14.1 Binary Image Connectivity; 14.2 Binary Image Hit or Miss Transformations; 14.3 Binary Image Shrinking, Thinning, Skeletonizing and Thickening; 14.4 Binary Image Generalized Dilation and Erosion; 14.5 Binary Image Close and Open Operations; 14.6 Gray Scale Image Morphological Operations); 15 Edge Detection (15.1 Edge, Line and Spot Models; 15.2 First-Order Derivative Edge Detection; 15.3 Second-Order Derivative Edge Detection; 15.4 Edge-Fitting Edge Detection; 15.5 Luminance Edge Detector Performance; 15.6 Color Edge Detection; 15.7 Line and Spot Detection); 16 Image Feature Extraction (16.1 Image Feature Evaluation; 16.2 Amplitude Features; 16.3 Transform Coefficient Features; 16.4 Texture Definition; 16.5 Visual Texture Discrimination; 16.6 Texture Features); 17 Image Segmentation (17.1 Amplitude Segmentation; 17.2 Clustering Segmentation; 17.3 Region Segmentation; 17.4 Boundary Segmentation; 17.5 Texture Segmentation; 17.6 Segment Labeling); 18 Shape Analysis (18.1 Topological Attributes; 18.2 Distance, Perimeter and Area Measurements; 18.3 Spatial Moments; 18.4 Shape Orientation Descriptors; 18.5 Fourier Descriptors; 18.6 Thinning and Skeletonizing); 19 Image Detection and Registration (19.1 Template Matching; 19.2 Matched Filtering of Continuous Images; 19.3 Matched Filtering of Discrete Images; 19.4 Image Registration).
PART 6 IMAGE PROCESSING SOFTWARE: 20 PIKS Image Processing Software (20.1 PIKS Functional Overview; 20.2 PIKS Scientific Overview); 21 PIKS Image Processing Programming Exercises (21.1 Program Generation Exercises; 21.2 Image Manipulation Exercises; 21.3 Color Space Exercises; 21.4 Region-of-Interest Exercises; 21.5 Image Measurement Exercises; 21.6 Quantization Exercises; 21.7 Convolution Exercises; 21.8 Unitary Transform Exercises; 21.9 Linear Processing Exercises; 21.10 Image Enhancement Exercises; 21.11 Image Restoration Models Exercises; 21.12 Image Restoration Exercises; 21.13 Geometrical Image Modification Exercises; 21.14 Morphological Image Processing Exercises; 21.15 Edge Detection Exercises; 21.16 Image Feature Extraction Exercises; 21.17 Image Segmentation Exercises; 21.18 Shape Analysis Exercises; 21.19 Image Detection and Registration Exercises).
Appendix 1 Vector-Space Algebra Concepts; Appendix 2 Color Coordinate Conversion; Appendix 3 Image Error Measures; Appendix 4 PIKS Compact Disk. Bibliography. Index.

241 citations


Journal ArticleDOI
01 Aug 1978
TL;DR: The problem of threshold evaluation is addressed, and two methods are proposed for measuring the "goodness" of a thresholded image, one based on a busyness criterion and the other based on a discrepancy or error criterion.
Abstract: Threshold selection techniques have been used as a basic tool in image segmentation, but little work has been done on the problem of evaluating a threshold of an image. The problem of threshold evaluation is addressed, and two methods are proposed for measuring the "goodness" of a thresholded image, one based on a busyness criterion and the other based on a discrepancy or error criterion. These evaluation techniques are applied to a set of infrared images and are shown to be useful in facilitating threshold selection. In fact, both methods usually result in similar or identical thresholds which yield good segmentations of the images.
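A minimal sketch of a busyness-style measure is given below (count of 4-adjacent pixel pairs whose binary labels differ); the paper's precise definitions and its discrepancy criterion are not reproduced, and in practice the threshold search is restricted, for example to the range between histogram modes:

```python
# Hedged sketch of a busyness-style criterion for evaluating a threshold.
import numpy as np

def busyness(image: np.ndarray, threshold: float) -> int:
    b = image >= threshold
    horiz = np.count_nonzero(b[:, 1:] != b[:, :-1])
    vert = np.count_nonzero(b[1:, :] != b[:-1, :])
    return horiz + vert

# Example use (the unconstrained minimum is trivially an all-one-class image,
# so the search range t_low..t_high should be restricted):
# best_t = min(range(t_low, t_high), key=lambda t: busyness(img, t))
```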

234 citations


Dissertation
01 Aug 1978
TL;DR: The distribution of surface orientation and reflectance factor over the surface of an object can be determined from scene radiances observed by a fixed sensor under varying lighting conditions; such techniques have potential application to the automatic inspection of industrial parts, the determination of the attitude of a rigid body in space, and the analysis of images returned from planetary explorers.
Abstract: The distribution of surface orientation and reflectance factor on the surface of an object can be determined from scene radiances observed by a fixed sensor under varying lighting conditions. Such techniques have potential application to the automatic inspection of industrial parts, the determination of the attitude of a rigid body in space and the analysis of images returned from planetary explorers. A comparison is made of this method with techniques based on images obtained from different viewpoints with fixed lighting.
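The now-standard Lambertian photometric-stereo formulation associated with this line of work can be sketched as follows, assuming three images taken under known, non-coplanar light directions (an illustration of the idea, not the thesis' procedure):

```python
# Hedged sketch of Lambertian photometric stereo with three known light directions.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: three HxW arrays; light_dirs: 3x3 array, one unit light vector per row."""
    I = np.stack([im.reshape(-1) for im in images])          # 3 x N intensities
    G = np.linalg.solve(np.asarray(light_dirs, float), I)    # g = albedo * normal, per pixel
    albedo = np.linalg.norm(G, axis=0)                       # reflectance factor
    normals = G / np.maximum(albedo, 1e-12)                  # unit surface normals
    h, w = images[0].shape
    return albedo.reshape(h, w), normals.reshape(3, h, w)
```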

232 citations


Journal ArticleDOI
TL;DR: An adaptive quantization scheme, based on histogram peak sharpening, was applied to two of the images; the results do not seem to be as good as those obtained using variable thresholding.


Journal ArticleDOI
TL;DR: A general expression for the signal-to-noise ratio (SNR) has been developed for the URA as a function of the type of object being imaged and the design parameters of the aperture.
Abstract: Uniformly redundant arrays (URA) have autocorrelation functions with perfectly flat sidelobes. The URA combines the high-transmission characteristics of the random array with the flat sidelobe advantage of the nonredundant pinhole arrays. A general expression for the signal-to-noise ratio (SNR) has been developed for the URA as a function of the type of object being imaged and the design parameters of the aperture. The SNR expression is used to obtain an expression for the optimum aperture transmission. Currently, the only 2-D URAs known have a transmission of 1/2. This, however, is not a severe limitation because the use of the nonoptimum transmission of 1/2 never causes a reduction in the SNR of more than 30%. The predicted performance of the URA system is compared to the image obtainable from a single pinhole camera. Because the reconstructed image of the URA contains virtually uniform noise regardless of the original object's structure, the improvement over the single pinhole camera is much larger for the bright points than it is for the low intensity points. For a detector with high background noise, the URA will always give a much better image than the single pinhole camera regardless of the structure of the object. In the case of a detector with low background noise, the improvement of the URA relative to the single pinhole camera will have a lower limit of approximately (2f)^(-1/2), where f is the fraction of the field of view that is uniformly filled by the object.

Journal ArticleDOI
TL;DR: This note suggests that insight into this transformation may be enhanced by mapping P back into I, which is the usual Hough procedure for detecting curves embedded in digitized images.
Abstract: The usual Hough procedure for detecting curves embedded in digitized images involves a transformation from image space I to parameter space P. In this note we suggest that insight into this transformation may be enhanced by mapping P back into I.
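A minimal sketch of the standard (rho, theta) Hough transform for lines, followed by mapping the strongest parameter-space peak back into image space in the spirit of the note's suggestion (illustrative code, not the authors'):

```python
# Hedged sketch: Hough transform for lines and back-mapping of the peak into image space.
import numpy as np

def hough_lines(edges: np.ndarray, n_theta: int = 180):
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1        # one vote per theta bin
    return acc, thetas, diag

def peak_back_to_image(acc, thetas, diag, shape):
    """Map the strongest accumulator cell back into image space as a binary mask."""
    r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    rho, theta = r_idx - diag, thetas[t_idx]
    y, x = np.mgrid[: shape[0], : shape[1]]
    return np.abs(x * np.cos(theta) + y * np.sin(theta) - rho) < 0.5
```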

01 Dec 1978
TL;DR: The major goal of the dissertation was the attempt to explain certain aspects of visual perception using a single concept: filtering.
Abstract: Visual perception was investigated using spatial filters that are constrained by biological data. This report has four major parts: (1) a theoretical background to Fourier analysis; (2) a review of the literature relating to the spatial filtering characteristics of mammalian visual systems; (3) visual information processing in terms of spatial filtering; (4) the relating of contrast sensitivity to the identification of complex objects. The common denominator to all the investigations is spatial filtering. The major goal of the dissertation was the attempt to explain certain aspects of visual perception using a single concept: filtering.

BookDOI
01 Jan 1978
TL;DR: This volume is the Proceedings of the NATO Advanced Study Institute on Pattern Recognition and Signal Processing and contains what I believe to be a truly outstanding collection of papers which cover all major activities in both pattern recognition and signal processing.
Abstract: Both pattern recognition and signal processing are rapidly growing areas. Organized with emphasis on the many inter-relations between the two areas, a NATO Advanced Study Institute on Pattern Recognition and Signal Processing was held June 25 - July 4, 1978 at the ENST (Department of Electronics) in Paris, France. This volume is the Proceedings of the Institute. It contains what I believe to be a truly outstanding collection of papers which cover all major activities in both pattern recognition and signal processing. The papers are grouped by topics as follows: I. Syntactic Methods: paper numbers 1, 2. II. Statistical Methods: paper numbers 3, 4, 5, 6. III. Detection and Estimation: paper numbers 7, 8. IV. Image Processing, Modelling, and Analysis: paper numbers 9, 10, 11, 12. V. Speech Application: paper numbers 13, 14. VI. Radar Application: paper number 15. VII. Seismic Application: paper number 16. VIII. Biomedical Application: paper numbers 17, 18, 19. IX. Reconstruction From Projections: paper numbers 20, 21. X. Signal Modelling and Application: paper numbers 22, 23, 24. XI. NATO Pattern Recognition Research Study Group Report: paper number 25. It is my strong belief that there is a need for continuing interaction between pattern recognition and signal processing. The book will serve as a useful text and reference for such a need, and for both areas. Finally, on behalf of all participants of the Institute, I would like to thank Drs. T. Kester and M. N. Czdas of NATO for their support.

Book
01 Jan 1978
TL;DR: Applications of digital signal processing.
Abstract: Applications of digital signal processing.

Journal ArticleDOI
TL;DR: The significance of this approach is that scene matching can be accomplished by the use of a computer even in cases which are difficult for humans or standard correlation techniques, and can be accomplished with greatly reduced computations.
Abstract: The general approach to matching two scenes by a digital computer is usually costly in computations. A match is determined by selecting the position of maximum cross correlation between the window and each possible shift position of the search region. A new approach which is logarithmically efficient is presented in this paper. Its logarithmic efficiency and computational savings will be demonstrated both theoretically and in practical examples. Experimental results are presented for matching an image region corrupted by noise and for matching images from optical and radar sensors. The significance of this approach is that scene matching can be accomplished by the use of a computer even in cases which are difficult for humans or standard correlation techniques, and can be accomplished with greatly reduced computations.
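The paper's specific algorithm is not reproduced here, but one common way to obtain logarithmic-type savings over exhaustive correlation is a coarse-to-fine (pyramid) search; the sketch below illustrates that general idea, under the assumption that the window is much smaller than the search region:

```python
# Hedged sketch: coarse-to-fine (pyramid) scene matching.  The exhaustive search
# runs only on heavily reduced images; finer levels only refine the shift locally.
import numpy as np

def downsample(a):                               # 2x2 block averaging
    h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
    a = a[:h, :w]
    return 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])

def score(search, window, r, c):                 # higher is better
    patch = search[r : r + window.shape[0], c : c + window.shape[1]]
    return -np.sum((patch - window) ** 2)

def coarse_to_fine_match(search, window, levels=3):
    pyr_s, pyr_w = [np.asarray(search, float)], [np.asarray(window, float)]
    for _ in range(levels - 1):
        pyr_s.append(downsample(pyr_s[-1]))
        pyr_w.append(downsample(pyr_w[-1]))
    s, w = pyr_s[-1], pyr_w[-1]                  # full search at the coarsest level only
    cand = [(r, c) for r in range(s.shape[0] - w.shape[0] + 1)
                   for c in range(s.shape[1] - w.shape[1] + 1)]
    best = max(cand, key=lambda rc: score(s, w, *rc))
    for lvl in range(levels - 2, -1, -1):        # local refinement at finer levels
        s, w = pyr_s[lvl], pyr_w[lvl]
        r0, c0 = 2 * best[0], 2 * best[1]
        cand = [(r, c)
                for r in range(max(0, r0 - 2), min(s.shape[0] - w.shape[0], r0 + 2) + 1)
                for c in range(max(0, c0 - 2), min(s.shape[1] - w.shape[1], c0 + 2) + 1)]
        best = max(cand, key=lambda rc: score(s, w, *rc))
    return best                                  # (row, col) shift at full resolution
```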

Journal ArticleDOI
TL;DR: In this paper, the application of Partial Differential Equation (PDE) models for restoration of noisy images is considered and performance bounds based on PDE model theory are calculated and implementation tradeoffs of different algorithms are discussed.
Abstract: Application of Partial Differential Equation (PDE) models to the restoration of noisy images is considered. The hyperbolic, parabolic, and elliptic classes of PDEs yield recursive, semirecursive, and nonrecursive filtering algorithms. The two-dimensional recursive filter is equivalent to solving two sets of filtering equations, one along the horizontal direction and the other along the vertical direction. The semirecursive filter can be implemented by first transforming the image data along one of its dimensions, say columns, and then recursively filtering along each row independently. The nonrecursive filter leads to a transform-domain algorithm of the Fourier-domain Wiener filtering type. Comparisons of the different PDE model filters are made by implementing them on actual image data. Performances of these filters are also compared with Fourier Wiener filtering and spatial averaging methods. Performance bounds based on PDE model theory are calculated, and implementation tradeoffs of different algorithms are discussed.
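A minimal sketch of the semirecursive structure described above: transform the image along one dimension (columns), then run an independent 1-D causal recursion along each row of transform coefficients. A first-order exponential smoother stands in for the PDE-derived recursion coefficients, which are not reproduced here:

```python
# Hedged sketch of the semirecursive filtering structure.
import numpy as np
from scipy.fft import dct, idct

def semirecursive_denoise(noisy: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    coeffs = dct(noisy, type=2, norm="ortho", axis=0)   # transform each column
    out = np.empty_like(coeffs)
    out[:, 0] = coeffs[:, 0]
    for j in range(1, coeffs.shape[1]):                 # causal recursion along each row
        out[:, j] = alpha * out[:, j - 1] + (1.0 - alpha) * coeffs[:, j]
    return idct(out, type=2, norm="ortho", axis=0)      # back to the image domain
```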


Journal ArticleDOI
TL;DR: It is found that preprocessing of the images via a gradient operator improves the registration performance in agreement with a derived optimal processor based upon image and temporal difference characteristics.
Abstract: An experimental comparison of several similarity measures and preprocessing techniques used for the registration of temporally differing images is carried out. It is found that preprocessing of the images via a gradient operator improves the registration performance. This is in agreement with a derived optimal processor (described in the Appendix) based upon image and temporal difference characteristics.
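A minimal sketch of the reported procedure: apply a gradient-magnitude operator to both images, then search for the integer shift that maximizes their correlation. The central-difference gradient and exhaustive shift search below are illustrative choices, not the paper's exact similarity measures:

```python
# Hedged sketch: gradient preprocessing followed by correlation-based registration.
import numpy as np

def gradient_magnitude(img: np.ndarray) -> np.ndarray:
    img = np.asarray(img, dtype=float)
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]              # central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.hypot(gx, gy)

def register_shift(reference, sensed, max_shift=8):
    ref_g, sen_g = gradient_magnitude(reference), gradient_magnitude(sensed)
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(sen_g, dy, axis=0), dx, axis=1)
            s = np.sum(ref_g * shifted)                  # unnormalized correlation
            if s > best_score:
                best, best_score = (dy, dx), s
    return best                                          # estimated (row, col) offset
```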

Journal ArticleDOI
TL;DR: A digital video image processor (VIP) has been constructed and is presently being tested and used in a variety of preclinical medical imaging situations and examples of in vivo K-edge and time-dependent subtraction images are presented.
Abstract: A digital video image processor (VIP) has been constructed and is presently being tested and used in a variety of preclinical medical imaging situations. Details of its design are discussed. The VIP can digitize, store and process images from a conventional radiographic TV fluoroscopy system. From these images a variety of subtraction images can be formed and displayed in real time at video rates. These subtraction images include: K-edge images, time dependent subtraction images, tomographic, and K-edge tomographic images. Examples of in vivo K-edge and time-dependent subtraction images are presented.
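As a hedged illustration of one of the subtraction modes mentioned (K-edge subtraction), the sketch below log-subtracts two digitized frames acquired at energies just below and just above the contrast agent's K-edge; the VIP's actual calibration, weighting, and real-time pipeline are not reproduced:

```python
# Hedged sketch of K-edge subtraction: the contrast agent's attenuation jumps
# across the K-edge, so it dominates the log-difference of the two frames.
import numpy as np

def k_edge_subtraction(frame_below, frame_above, eps=1e-6):
    low = np.log(np.asarray(frame_below, dtype=float) + eps)
    high = np.log(np.asarray(frame_above, dtype=float) + eps)
    return high - low
```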

Journal ArticleDOI
TL;DR: The approach presented here formulates the contour extraction problem as one of minimum-cost tree searching and applies the Zigangirov-Jelinek (ZJ) stack algorithm to two real biomedical image processing problems.

Proceedings Article
01 Jan 1978

Journal ArticleDOI
TL;DR: Locally adaptive image processing methods are constructed by sectioning the image and applying a modified MAP restoration algorithm; these local algorithms are shown to be effective in processing nonstationary images.
Abstract: Locally adaptive image processing methods are constructed by sectioning the image and applying a modified MAP restoration algorithm. These local algorithms are shown to be effective in processing nonstationary images. The algorithms can work in both signal-independent and signal-dependent noise. The gains achieved by local and signal-dependent processing are analyzed.
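A minimal sketch of sectioned, locally adaptive processing: split the image into blocks, estimate statistics per block, and adapt the correction to each block. A local-statistics Wiener-type shrinkage stands in here for the paper's modified MAP step:

```python
# Hedged sketch of sectioned, locally adaptive restoration.
import numpy as np

def sectioned_restore(noisy: np.ndarray, noise_var: float, block: int = 16) -> np.ndarray:
    out = np.empty(noisy.shape, dtype=float)
    for r in range(0, noisy.shape[0], block):
        for c in range(0, noisy.shape[1], block):
            sec = noisy[r : r + block, c : c + block].astype(float)
            mean, var = sec.mean(), sec.var()
            gain = max(var - noise_var, 0.0) / max(var, 1e-12)  # adapts to local activity
            out[r : r + block, c : c + block] = mean + gain * (sec - mean)
    return out
```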

Journal ArticleDOI
TL;DR: In this paper, the authors extend sectional methods in image processing to the processing of degradations produced by space-variant point spread functions.
Abstract: Previous work on sectional methods in image processing is extended to the processing of degradations produced by space-variant point spread functions.

Journal ArticleDOI
TL;DR: The tedious numerical computations associated with the calculation of partially coherent imagery are alleviated by a method which uses dimensionless coordinates and takes advantage of the properties of the Fourier transform.
Abstract: The tedious numerical computations associated with the calculation of partially coherent imagery are alleviated by a method which uses dimensionless coordinates and takes advantage of the properties of the Fourier transform. A 1-D periodic object function can model many objects of practical interest, including nonperiodic objects. The properties of a given optical system are described in terms of the transmission cross coefficient. For aberration-free systems with circular pupils, including annular sources (dark-field illumination), the cross coefficient can be calculated analytically. For aberrated or apodized systems, a 1-D approximation can be used. The effect of a convolving slit in the image plane of a scanning microscope can also be included.
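A minimal sketch of the Hopkins-style formulation behind such calculations: the image of a 1-D periodic object is assembled from its Fourier harmonics weighted by the transmission cross coefficient (TCC), with frequencies in dimensionless units (pupil cutoff normalized to 1). An aberration-free top-hat pupil and a conventional source of coherence factor sigma are assumed; this illustrates the formulation, not the paper's numerical method:

```python
# Hedged sketch: partially coherent image of a 1-D periodic object via the TCC.
import numpy as np

def tcc(f1, f2, sigma, n=4001):
    f = np.linspace(-sigma, sigma, n)                    # effective source points
    pupil = lambda u: (np.abs(u) <= 1.0).astype(float)   # aberration-free top-hat pupil
    return np.mean(pupil(f + f1) * pupil(f + f2))        # normalized so TCC(0, 0) = 1

def image_intensity(harmonics, f0, sigma, x):
    """harmonics: {m: complex amplitude}; f0: object frequency / pupil cutoff;
    x: positions in units of the object period."""
    I = np.zeros_like(x, dtype=float)
    for m, am in harmonics.items():
        for k, ak in harmonics.items():
            T = tcc(m * f0, k * f0, sigma)
            I += np.real(T * am * np.conj(ak) * np.exp(2j * np.pi * (m - k) * x))
    return I

# Example: a grating truncated to three harmonics under sigma = 0.5 illumination.
x = np.linspace(0.0, 1.0, 200)
profile = image_intensity({0: 0.5, 1: 0.3, -1: 0.3}, f0=0.8, sigma=0.5, x=x)
```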

Journal ArticleDOI
P.R. Smith1
TL;DR: A set of computer programs developed for processing electron micrographs of biological structures is described; these programs provide facilities for efficient image storage, enhancement and display.

Patent
15 Nov 1978
TL;DR: In this paper, an X-ray contrast medium is injected into a peripheral blood vessel of the anatomical subject, with a timing such that the contrast medium appears in the X-ray image subsequent to the mask time interval.
Abstract: Difference images, derived from an X-ray image of an anatomical subject, are produced in real time by directing X-rays through the anatomical subject to produce an X-ray image, converting the X-ray image into television fields comprising trains of analog video signals, converting the analog video signals into digital video signals, producing integrated mask digital video signals by integrating the digital video signals over a mask time interval, subtracting the integrated mask digital video signals from corresponding digital video signals of television fields subsequent to the mask time interval and thereby producing digital difference video signals, converting the digital difference video signals into analog difference video signals, and converting the analog difference video signals into a series of visible television difference images representing changes in the X-ray image subsequent to the mask time interval. The mask time interval preferably corresponds generally to at least one complete cardiac cycle of the anatomical subject. An X-ray contrast medium is preferably injected into a peripheral blood vessel of the anatomical subject, with a timing such that the contrast medium appears in the X-ray image subsequent to the mask time interval. In another embodiment, the integrated mask digital video signals are reconverted to analog form and are subtracted on an analog basis from the analog video signals produced subsequent to the mask time interval, to produce the analog difference video signals.
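A minimal sketch of the claimed signal flow, on arrays standing in for digitized video fields: integrate (average) the fields acquired during the mask interval, ideally spanning at least one cardiac cycle, then subtract that integrated mask from each later field to form difference images (array-level illustration only, not the patented hardware):

```python
# Hedged sketch of mask-mode subtraction over digitized video fields.
import numpy as np

def difference_images(fields, mask_count: int):
    """fields: time-ordered sequence of 2-D arrays; the first mask_count form the mask."""
    fields = [np.asarray(f, dtype=float) for f in fields]
    mask = np.mean(fields[:mask_count], axis=0)          # integrated mask signal
    return [f - mask for f in fields[mask_count:]]       # post-mask difference images
```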

01 Mar 1978
TL;DR: In this paper, the authors describe a new procedure for tracking road segments and finding potential vehicles in imagery of approximately 1-3 feet per pixel ground resolution using a generalized digital map data base to aid in the interpretation of imagery.
Abstract: : This report describes a new procedure for tracking road segments and finding potential vehicles in imagery of approximately 1-3 feet per pixel ground resolution. This work is part of a larger effort by SRI International to construct an image understanding system for monitoring roads in aerial imagery. The overall effort is directed towards specific problems that arise in processing aerial photographs for such military applications as cartography, intelligence, weapon guidance, and targeting. A key concept is the use of a generalized digital map data base to aid in the interpretation of imagery. The primary objectives of the overall "knowledge-based road expert system" are to analyze images to accomplish the following: (1) find road fragments in low- to medium-resolution images; (2) track roads in medium- to high-resolution images; (3) find anomalies on roads; and (4) interpret anomalies as vehicles, shadows, signposts, surface markings, etc. The road tracking algorithm is started by indicating the center and direction of a road fragment found in low- to medium-resolution images. The nominal road width is supplied either from the data base or by an image analysis function that can determine the width of a road fragment. The road tracker produces two forms of output: a point list describing the track of the road center, and a binary image of all points in the road that are anomalous and might belong to vehicles. In the complete road-expert system, this image will then be analyzed to screen out false alarms and interpret the remaining anomalies.